Nicholas Clark wrote:
>> [...]
>
> Also, you mentioned Encode and the files it generated. I found a lot of
> scope for optimisation within enc2xs, which dramatically decreased its
> memory use and run time, without needing large fundamental design changes
> to how it worked. And I managed that with only Devel::DProf. I'm curious
> what can be achieved with Devel::NYTProf on mktables. (Which is a task
> that is within the skill set of any of the 600 subscribers to this list.
> I'm hoping that it might appeal to at least one.)

So, I tried it with NYTProf. As I expected (I had used DProf earlier), the
subroutine with the highest usage was my pure Perl version of
Scalar::Util::refaddr, reproduced below; a third of the total time was
spent in this one routine. (A pure Perl version is required because
miniperl doesn't do dynamic loading, so refaddr itself is not available.)
When I was writing mktables, I was under the impression that refaddr would
be brought into the core for 5.12. There was an agreement to that effect,
but I guess no one ever got around to actually doing it.

When I run this under perl instead of miniperl, and change objaddr to just
return refaddr, the combination still takes quite a lot of time. If
refaddr were in the core, would it be in-lined?

I found a few surprises, though I haven't pored over the results. One is
that I had left in a trace statement that got all the way into the trace
subroutine before discovering that it had nothing to do. This added only a
little time. Based on looking at existing code in utf8_heavy.pl, I had
presumed that the Perl optimizer would remove code that depended on a
constant subroutine returning false; that is, that 'foo if DEBUG' would be
optimized away if there were a line 'sub DEBUG { 0 }'. But that appears
not to be the case.

There were more string evals than I expected, though their total time did
not add up to all that much. I couldn't find a way in NYTProf to highlight
them.
There are two columns in the nytprofhtml output for subroutines, 'P' and
'F', whose meaning I can't figure out; I saw no documentation for them. I
also did not see anything there about memory usage.

I don't know how Perl handles using up too much memory. The old mktables
kept all its tables in memory, and so I felt free to do the same. But the
new mktables handles quite a few more tables than the old one, and I would
expect thrashing if the memory usage got too big. I wonder if Steve's
machine doesn't have much memory.

When I run mktables using perl instead of miniperl, I get execution times
between 30 and 40 seconds.