perl.perl6.language | Postings from February 2001

Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

Jan Dubois
February 11, 2001 18:00
On Fri, 09 Feb 2001 13:19:36 -0500, Dan Sugalski <> wrote:

>Almost all refcounting schemes are messy. That's one of its problems. A 
>mark and sweep GC system tends to be less prone to leaks because of program 
>bugs, and when it *does* leak, the leaks tend to be large. Plus the code to 
>do the GC work is very localized, which tends not to be the case in 
>refcounting schemes.
>Going to a more advanced garbage collection scheme certainly isn't a 
>universal panacea--mark and sweep in perl 6 will *not* bring about world 
>peace or anything. It will (hopefully) make our lives easier, though.

I currently don't have much time to follow the perl6 discussions, so I
might have missed this, but I have some questions about abandoning
reference counts for Perl internals.  When I reimplemented some of the
Perl guts in C# last year for the "Perl for .NET" research project, I
tried to get rid of reference counting because the runtime already
provides a generational garbage collection scheme.

However, I couldn't solve the problem of "deterministic destruction
behavior": Currently Perl will call DESTROY on any object as soon as the
last reference to it goes out of scope.  This becomes important if the
object owns scarce external resources (e.g. file handles or database
connections) that are only freed during DESTROY.  Postponing DESTROY until
an indeterminate time in the future can lead to program failures due to
resource exhaustion.
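The behavior at stake can be sketched in Perl 5 (the class and names here are purely illustrative, not from the project mentioned above). Under reference counting, DESTROY fires at the exact moment the last reference disappears:

```perl
use strict;
use warnings;

package ScarceResource;

my @log;    # records the order of events for this demonstration

sub new {
    my ($class, $name) = @_;
    push @log, "open $name";
    return bless { name => $name }, $class;
}

sub DESTROY {
    my $self = shift;
    push @log, "close $self->{name}";   # fires as soon as the refcount hits zero
}

package main;

{
    my $r = ScarceResource->new('db-handle');
    push @log, 'in scope';
}            # last reference gone: DESTROY runs right here, not "eventually"
push @log, 'after scope';
```

With refcounting, "close db-handle" is guaranteed to precede "after scope"; under a tracing collector the close could be postponed arbitrarily.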

The second problem is destruction order:  With reference counts, destruction
respects the dependency graph between objects.  Without them, objects are
finalized in an arbitrary order, which is sometimes a problem: You may have a
database connection and a recordset.  The recordset may need to be
DESTROYed first because it may contain unsaved data that still needs to be
written back to the database.
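That dependency can be made concrete in a small Perl 5 sketch (again with illustrative class names): because the recordset holds a reference to its connection, refcounting guarantees the recordset is destroyed first, while the connection is still alive to receive the flush.

```perl
use strict;
use warnings;

package Connection;

my @order;    # records destruction order for the demonstration

sub new     { bless {}, shift }
sub DESTROY { push @order, 'connection closed' }

package Recordset;

sub new {
    my ($class, $conn) = @_;
    # holding $conn keeps the connection's refcount above zero
    return bless { conn => $conn, dirty => 1 }, $class;
}

sub DESTROY {
    my $self = shift;
    push @order, 'recordset flushed' if $self->{dirty};
    # the connection reference is released only after this DESTROY finishes
}

package main;

{
    my $conn = Connection->new;
    my $rs   = Recordset->new($conn);
}   # refcounting: recordset DESTROYed first, then the connection
```

A collector that finalizes in arbitrary order could close the connection before the recordset gets a chance to write back its data.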

I've been discussing this with Sarathy multiple times over the last year,
and he insists that relying on DESTROY for resource cleanup is bad style
and shouldn't be done anyway.  But always explicitly calling e.g. Close()
or whatever is pretty messy at the application level: you have to use
eval{} blocks all over the place to guarantee calling Close() even when
something else blows up.
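A minimal sketch of that pattern (the function and handle names are made up for illustration): every use of the resource needs an eval{} wrapper so the explicit close runs on both the success path and the error path.

```perl
use strict;
use warnings;

my @events;    # records what happened, in order

sub open_handle  { push @events, 'open';  return { open => 1 } }
sub close_handle { push @events, 'close'; $_[0]{open} = 0 }

sub do_work {
    my $h = open_handle();
    my $ok  = eval { die "something blew up\n"; 1 };  # simulated failure
    my $err = $@;
    close_handle($h);            # must run whether or not the eval failed
    die $err unless $ok;         # propagate the original error afterwards
}

eval { do_work() };
push @events, "caught: $@" if $@;
```

Even though do_work() dies, the handle is closed before the error propagates -- but only because every call site repeats this eval-then-cleanup-then-rethrow dance that DESTROY would otherwise do for free.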

As an implementer I most definitely see the advantages of giving up
deterministic destruction in favor of finalizer calls in an arbitrary order.
But as a Perl programmer I loathe the additional complexity for my Perl
programs to make them robust.  There is a reason memory allocation isn't
exposed to the user either. :-)

Have these issues been discussed somewhere for Perl6?  If yes, could you
point me to that discussion?

-Jan