On Mon Sep 01 12:43:17 2014, rurban wrote:
> READ of size 6 at 0x60200004ec15 thread T0
>     #0 0x454c58 in __interceptor_memcmp (/home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/perl+0x454c58)
>     #1 0x7f7f605aebc0 in modify_SV_attributes /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/ext/attributes/attributes.xs:100
>     #2 0x7f7f605a0042 in XS_attributes__modify_attrs /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/ext/attributes/attributes.xs:134
>     #3 0x7f7f64c97653 in Perl_pp_entersub /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/pp_hot.c:2784
>     #4 0x7f7f6498ac8a in Perl_runops_debug /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/dump.c:2358
>     #5 0x7f7f642d077b in S_run_body /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/perl.c:2408
>     #6 0x7f7f642cc3f2 in perl_run /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/perl.c:2336
>     #7 0x47b95f in main /home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/perlmain.c:114
>     #8 0x7f7f63114b44 (/lib/x86_64-linux-gnu/libc.so.6+0x21b44)
>     #9 0x47b2dc in _start (/home/rurban/Perl/src/build-5.21.4d-nt-asan@32818149/perl+0x47b2dc)
>
> This is caused by a wrong memcmp in attributes.xs:100
> patch attached

That patch also includes a change unrelated to attributes.xs. I plan to apply the attached subset of that patch.

That said, that little change did seem to make a difference in performance on both Linux and darwin, but the problem is that without knowing why, we can't tell whether it will degrade performance on other operating systems, or even on different versions of the same operating system.

As to your original patch, I got the impression from the "All gone" thread that you'd prefer that we only did our own exponential allocation growth on systems that require it - Win32 in particular. Can you think of a Configure test we could do to detect systems that need that?
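For context, "exponential allocation growth" here means growing a buffer geometrically rather than asking realloc() for each new exact size. A minimal sketch of the idea (the helper name and growth factor are illustrative, not Perl's actual allocator code):

```c
#include <stdlib.h>

/* Hypothetical helper: when `needed` exceeds the current capacity,
 * grow the capacity geometrically (here by 1.5x) so that N one-byte
 * extensions cost O(log N) reallocations instead of O(N).
 * Returns the (possibly moved) buffer, or NULL on failure. */
void *grow_buffer(void *p, size_t needed, size_t *capacity) {
    if (needed <= *capacity)
        return p;                 /* already enough room */
    size_t newcap = *capacity ? *capacity : 16;
    while (newcap < needed)
        newcap += newcap / 2;     /* geometric growth */
    void *np = realloc(p, newcap);
    if (np != NULL)
        *capacity = newcap;
    return np;
}
```

On allocators that already over-allocate and extend blocks in place (as the numbers below suggest glibc does), this extra layer buys little; on Win32, where small reallocs move the block more often, it can matter.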
I tried a simple program:

    #include <stdio.h>
    #include <stdlib.h>

    int main() {
        size_t sz = 100;
        void *p = malloc(sz);
        void *g = malloc(sz);
        size_t schanges = 0, bchanges = 0;
        /* grow one byte at a time, counting how often realloc()
           moves the block; g keeps a live allocation adjacent to p
           so p can't always just extend in place */
        for (; sz < 1000; ++sz) {
            void *np = realloc(p, sz);
            if (np == NULL)
                exit(1);
            if (np != p) {
                ++schanges;
                p = np;
                free(g);
                g = malloc(sz);
            }
        }
        for (; sz < 10000; ++sz) {
            void *np = realloc(p, sz);
            if (np == NULL)
                exit(1);
            if (np != p) {
                ++bchanges;
                p = np;
                free(g);
                g = malloc(sz);
            }
        }
        free(p);
        free(g);
        printf("s %lu b %lu\n", (unsigned long)schanges, (unsigned long)bchanges);
        return 0;
    }

which produced:

    Win32:          s 1 b 7
    Cygwin:         s 1 b 0
    Linux (Debian): s 3 b 0

With larger sizes the results were a bit less slanted towards Win32 being bad and Cygwin/Linux being good, e.g. for start 1000, second 100000, third 1000000:

    Win32:  s 16 b 17
    Cygwin: s 3  b 14
    Linux:  s 1  b 5

but I suspect the larger sizes get into ranges where the implementation uses page mapping rather than memcpy() to move memory around when the block needs to be moved.

Tony

---
via perlbug: queue: perl5 status: open
https://rt.perl.org/Ticket/Display.html?id=122629