On 26 October 2016 at 13:18, Dave Mitchell <davem@iabyn.com> wrote:
> On Mon, Oct 24, 2016 at 03:34:02PM +0200, demerphq wrote:
>> On 24 October 2016 at 15:28, Dan Collins <dcollinsn@gmail.com> wrote:
>> > This seems reasonable. I had expected "fresh_perl" to hide those failures,
>> > but that seems to be /very/ platform dependent. Let's just skip that test
>> > altogether.
>>
>> What about just fixing it?
>>
>> If we say that you cannot trigger an overload for an object inside an
>> overload handler that was called by overload to handle that object,
>> then we could prevent this type of error.
>>
>> IOW, before every overload call we would check a global hash to see if
>> the object was in the hash; if it was, we would die with an exception.
>> If it was not, we would add it to the hash. When the overload method we
>> called exited, we would remove the object from the hash. (We could also
>> attach some kind of attribute to the object to say it was inside of an
>> overload call.)
>>
>> It would slow down overload methods a touch, but it would not segfault.
>
> We would have to also make sure the hash entry was removed if we die.

Yes, we would have to do the equivalent of

    local $hash{ref($code).$method} = 1;

> Also, can it ever be reasonable to call an overload handler recursively
> on the same object?

Actually, that isn't the right question. Is it ever reasonable to call an
overload handler /via overload/ recursively? (And perhaps we only want to
forbid recursion on the same object+method combination.)

The problem with this kind of thing is that it is infinite recursion via
the C stack. If we can force the developer out of using the C stack for
this kind of thing, and into using the normal call mechanism, then they
don't lose functionality, and we don't have to worry about C stack
overflow.
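To make the proposal concrete, here is a minimal pure-Perl sketch of the guard being described: before dispatching a handler, record a key for the (class, method) pair in a global hash and die if it is already present; `local` ensures the entry is unwound whether the handler returns or dies. The names `%in_overload` and `call_overload_handler` are hypothetical illustrations, not anything in the perl core, which would need to do this in C at the overload dispatch point:

```perl
use strict;
use warnings;

# Hypothetical guard hash, keyed as the mail suggests: ref($obj) . $method.
# A package variable so that local() can temporarily set an element of it.
our %in_overload;

# Hypothetical dispatch wrapper: die instead of recursing into an overload
# handler that is already active for this class+method combination.
sub call_overload_handler {
    my ($obj, $method, $handler, @args) = @_;
    my $key = ref($obj) . '::' . $method;
    die "Recursive overload call for $key\n" if $in_overload{$key};
    local $in_overload{$key} = 1;    # removed on return *or* on die
    return $handler->($obj, @args);
}
```

Because the entry is localized rather than deleted by hand, a handler that throws leaves the guard hash clean, which addresses Dave's "removed if we die" concern.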
And IMO adding a restriction like "it is an error to trigger an overload
method call within the handler for that overload method" does not imply
any practical restriction on the programmer: they can call the overload
method handlers directly if they wish, just not via overload. My guess is
that most people who have ended up writing overload code that recurses
have always had a headache, and imposing the restriction at the core
level would break little code.

FWIW, we have seen issues like this in the past. For instance, IIRC Carp
has had various issues with overload and infinite loops fixed or worked
around over the years. (Using Carp inside of an overload handler used to
trigger infinite loops when carp would try to stringify the arguments
from the call stack, triggering further overload operations.)

So IMO making this a trappable exception with sane semantics is better
than the current behavior.

Of course there is also the option of figuring out how to avoid this
problem with mixed C/Perl call stacks. :-) But I guess that is a much
harder problem. :-)

cheers
Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"