On Wed Feb 19 02:20:46 2014, zefram@fysh.org wrote:
>
> This is a very unusual pattern of memory usage, and it's not worth
> optimising for it. More commonly these quickly-freed SVs would be
> interspersed with some that have a longer lifetime and so prevent the
> freeing of that block of pages.
>
> -zefram

I disagree. If you allocate an AV with at least (2^16)/16 = 4096 slices/SV*s and then free the AV, the memory won't be released. There are numerous public complaints throughout Perl 5's history about "perl never releases memory" to the OS. Loading large data files (>100 KB) such as CSV, JSON, and the like into Perl data structures and transforming or searching them, with the operation taking at least 1 second of full-throttle I/O and/or CPU usage, is a common operation.

To address what I think will be the response: processing each data file/task in a fork is not a solution. It is a band-aid for a design bug in Perl. Perl should be able to go to 1 or 2 GB of memory to process a huge data set, then scale back down to below 150% of the memory usage before the task/data-set job began, not stay at 1 GB or 400% even though all variables/data belonging to the task were freed at the PP level.

--
bulk88 ~ bulk88 at hotmail.com

---
via perlbug: queue: perl5 status: open
https://rt.perl.org/Ticket/Display.html?id=121274
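
For reference, a minimal sketch of the scenario described in the message above. It assumes a Linux-style /proc/self/status for the RSS readings (the rss_kb helper and the exact element count are illustrative, not part of the report), and the numbers will vary by platform and malloc:

use strict;
use warnings;

# Read the resident set size from /proc/self/status (Linux-only assumption).
sub rss_kb {
    open my $fh, '<', '/proc/self/status' or return 'n/a';
    while (my $line = <$fh>) {
        return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
    }
    return 'n/a';
}

print "before task: ", rss_kb(), " kB\n";

# Build one AV holding far more than 4096 SV*s, the threshold mentioned above.
my @big = map { "row $_" } 1 .. 1_000_000;
print "peak:        ", rss_kb(), " kB\n";

# Free everything at the Perl (PP) level.
@big = ();
undef @big;
print "after free:  ", rss_kb(), " kB\n";   # RSS typically stays near the peak

On the builds I would expect most users to have, the "after free" figure stays close to the peak rather than dropping back toward the starting figure, which is the behaviour being complained about.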