develooper Front page | perl.perl5.porters | Postings from March 2013

[perl #117259] Re: Bleadperl v5.17.9-200-g0e0ab62 breaks MLEHMANN/JSON-XS-2.33.tar.gz

schmorp @ schmorp . de
March 21, 2013 13:28
[perl #117259] Re: Bleadperl v5.17.9-200-g0e0ab62 breaks MLEHMANN/JSON-XS-2.33.tar.gz
Message ID:
# New Ticket Created by 
# Please include the string:  [perl #117259]
# in the subject line of all future correspondence about this issue. 
# <URL: >

On Thu, Mar 21, 2013 at 12:28:54PM +0100, demerphq <> wrote:
+-> > >> > - Unless I am mistaken, this is to avoid a DOS attack. DOS attacks are easy
+-> > >> >   to protect against, and this won't protect me against basically any form
+-> > >> >   of attack, so it's useless. Far better methods exist that prevent every
+-> > >> >   form of memory resource and/or cpu time starvation.
|   > >>
|   > >> You are mistaken.
|   > >
|   > > You are really claiming that this is one of the best methods to prevent
|   > > memory resource and/or cpu time starvation? Because that's what the statement
|   > > you quoted says.
|   > 
|   > I am sorry, I seem to be missing some context here. What statement did
|   > I quote? I just checked this thread; I did not quote anything.
+-- These ones (hope you can read my ascii art).

I interpreted it as referring to the last one you quoted.

Now, if you didn't quote anything, what would your statement refer to?

> The purpose of hash iterator randomization is to make it harder, and
> hopefully outright impossible, to discover the hash seed a given
> process is using.

That in itself is obviously a non-goal. No, I don't believe that is the real
purpose. It must be something else, like: "to make it harder to mount DOS
attacks exploiting knowledge about hash collisions".

Or something like that.

In case you wonder why it is a non-goal: I can discover these hash seeds
with a debugger, or an XS module, and I doubt your change will stop me
from doing so (to the extent this still exists).

> Once an attacker can determine the hash seed they can launch
> algorithmic complexity attacks on our hash implementation. However the
> iterator randomization does not in any way prevent a DOS, it prevents
> obtaining the information that makes a DOS possible.

I think you are splitting hairs, but you get it wrong.

The purpose of your change is to make it harder to mount such DOS attacks.
Saying that this is not the purpose, and that the purpose is merely to hide
this information, is dishonest, because hiding this information isn't a
useful purpose in itself.

So it seems I was completely right after all.

Thank you for spelling it out for me to verify.
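
For readers following along: the "algorithmic complexity attack" mentioned
above works by forcing many keys into one bucket of a chained hash table. A
minimal sketch in Python of the general idea (a toy table with a deliberately
seedless, djb2-style hash for illustration only; this is not Perl's actual
implementation, and the class and key choices are hypothetical):

```python
from itertools import product

class ToyHash:
    """Toy chained hash table with a fixed, public hash function --
    exactly the situation that per-process seeding is meant to prevent."""
    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # Seedless djb2-style hash: an attacker can precompute collisions.
        h = 0
        for ch in key:
            h = (h * 33 + ord(ch)) & 0xFFFFFFFF
        return h % len(self.buckets)

    def insert(self, key, value):
        chain = self.buckets[self._bucket(key)]
        for i, (k, _) in enumerate(chain):   # linear scan of the chain
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

# "Aa" and "B@" hash identically under h*33+c, so every concatenation of
# these two blocks collides as well: 2**n distinct keys, one bucket, and
# n inserts degrade to O(n**2) total work.
colliding = ["".join(p) for p in product(["Aa", "B@"], repeat=4)]
t = ToyHash()
for k in colliding:
    t.insert(k, 1)
longest = max(len(b) for b in t.buckets)
print(len(colliding), "distinct keys,", longest, "of them in a single chain")
```

With a secret, per-process seed the attacker cannot precompute such a key
set, which is the defence being discussed here.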

> > The perlfunc 5.16 documentation on "keys". Older versions are even more
> > explicit.
> >
> > All of them are ambiguous, but I already mentioned that.
> Well, I beg to differ. I think you are conflating "does not say anything
> on the subject" and "says something ambiguous".

I think it strongly implies it. Beat me if you wish, but your
interpretation is as good as mine, regardless of the intent behind it.

> The documentation for keys(), and as far as I know all other
> documentation relating to hashes says nothing about consistency,
> either cross process or in between hashes.

Sorry, but "guaranteed to be the same as either the keys ..." clearly
implies some consistency, at least between those. Also, saying the order
is the same for the "same hash" is ambiguous, as "same" has two meanings:
"identical" (of the same identity) and "similar in some way" (e.g. having
the same contents).

> It says that the keys are returned in an apparently random order, that
> the random order is subject to change in future versions of perl,

No problem with that.

> the only guarantee of any form provided is that the keys() function
> will return items in the same order that each() and values would return
> them assuming the hash has not been modified (a secondary clause in the
> documentation for "each" specifies it is safe to delete the key that was
> just returned by each()).

It also says the ordering is the same for the same hash.

I know how it's meant, but I also know what is said, and assuming that
"the same hash will result in the same ordering" is not unreasonable.
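
The narrow contract under discussion (keys, values and each agreeing in
order on one unmodified hash, and repeated iteration over the identical
hash being stable) can be stated mechanically. A hedged sketch in Python,
used here only because it lets the contract be executed; the first two
assertions mirror Perl's documented keys()/values() guarantee:

```python
# One unmodified mapping: iteration order is stable for the identical
# object, and keys()/values() correspond position by position.
d = {"apple": 1, "pear": 2, "plum": 3}

order1 = list(d.keys())
order2 = list(d.keys())
assert order1 == order2                       # identical object: stable order

# keys() and values() agree, like Perl's keys()/values() for an
# unmodified hash.
assert list(d.values()) == [d[k] for k in order1]

# Note: the contract says nothing about a *different* mapping with the
# same contents iterating in the same order. (Python dicts happen to be
# insertion-ordered since 3.7, so that part of the ambiguity is
# Perl-specific.)
print("keys/values agree in order:", order1)
```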

Note that the earlier randomisation patch already broke these semantics, as
the ordering changed even within the same version (but across different
program runs).

> Furthermore I cannot find any evidence that the wording for this has
> changed in terms of specificity in the lifetime of perlfunc.pod:

Saying it changes in future versions but is the same for the same hash
STRONGLY implies that the ordering is the same for the same version of
perl, which is already not true.

I grant you, as I said, that the wording is ambiguous, and that it was
actually an older change that broke it, but this interpretation is
entirely valid, and I am not the only one who used it.

> perlfunc.pod didn't exist prior to that commit.

And it clearly changed, if you took notice.

> I see nothing ambiguous in the documentation, nor do I see it becoming
> more or less specific over time.

Saying you don't see something has no value whatsoever. People tend to
choose one interpretation over another.

You'd have to make the stronger claim that no other interpretation is
reasonably possible, and I don't think you can, simply because "same", in
English, is ambiguous as to whether it means identity or equality.

> > I don't think that, and haven't made that claim. I think you can make
> > faster hashes with better properties without the drawbacks.
> So then why did you bring it up in this thread? If you have issues
> with hash performance and think you can do better then please start a
> new thread, preferably by posting patches.


I said it before, and I'll say it again: I didn't bring up what you are
putting in my mouth. Let me explain once more what I meant, and this time,
please either believe what I write, or argue against it, but please don't
attack strawmen:

I think one can make faster and better hashes which fix this problem without
breaking code.

That means the breakage is not necessary.

> >> > - Taking out the inability to write a proper hash on all the perl developers
> >> >   out there is just wrong - Perl is the runtime, and should get it right,
> >> >   if possible.
> >>
> >> This doesn't make grammatical sense,
> >
> > I think it's a perfectly fine english sentence. Maybe a bit complicated for
> > you to understand though.
> I guess you meant "your" when you said "the".

No, I specifically meant "the", because I didn't want to say "your".

> If so then I will just say that I find this comment rude and
> uninformed.

Since your assumption is wrong, your conclusion is, fortunately, irrelevant.

> I am not responsible for our current hash implementation,

And I didn't claim so. On purpose, in fact.

> nor do I think you could do any better. I am pretty sure that the only
> way one can improve perl's hash performance is to make it do less, and
> that by doing so you would upset far more people than you would make
> happy by making it a nanosecond faster.
> But please, go ahead and prove me wrong. Post some patches.

I already did some benchmarks with unordered hashes. I wonder what they
do less than perl hashes, especially now that almost all consistency in
ordering is gone, which is usually the only problem.

> > I am trying to say that the solution for bad hash tables is better hash
> > tables, or a sense of proportion, not breaking perfectly fine existing
> > code.
> I find it really hard to take you seriously given how often you make
> silly statements like "not breaking perfectly fine existing code".

If you want me to respect your opinions on what is fine code, please
respect mine.

> If your code broke because of this change it is because you were making
> unwarranted assumptions about how perl's hash function operated,

I don't know if they are unwarranted, but at least they were documented,
even if that turns out to be ambiguous wording.

> as such your code wasn't perfectly fine. In fact if your code is
> breaking due to this change then I could trivially contrive code in an
> older version of perl that would also break your assumptions.

Well, the code in question (which defines my assumptions) works with older
versions of perl, so I call your bluff.

> > To me, breaking existing code for questionable or no reason is making it
> > worse. It is obvious that you disagree and think you didn't make it worse,
> > but at best, you have to disagree with my perfectly valid opinion on this
> topic.
> I have to disagree? What? Did you mean "agree"?

No, I mean "disagree".

You seem to be very combative, and I guess this is why you make so many
simple mistakes: I usually really mean what I write (and even then, I do
make mistakes).

I _really_ mean that you disagree, and I really mean that you have to
disagree with me if you have another opinion.

> Anyway, it doesn't matter. I don't break existing code for no reason.

Sure. I am saying that the reasons are bad ones, and badly executed, not
that it is without reason.

> A) I have reasons,

Never said anything to the contrary.

> B) any code broken was already broken but the author did not know it

Empty claim.

> C) there is a reason the docs say "The actual random order is subject to
> change in future versions of perl"

The actual random ordering changes with each "make test" run, too, but the
testsuite still passes. That is clearly not the bug here.

> and make no other guarantees than the limited ones I enumerated
> above. The reason of course is to allow the hash function to be changed
> or improved over time.

It says the ordering is the same for the same hash.

Changing the hash function between versions doesn't break the code.
Changing the ordering within the same version of perl, and within the same
process, is what breaks it.

At this point I must actually admit that I didn't even check that this is
true, I only assume it.

So if the ordering is really the same for the same hash, within the same
process (using the same perl), then I am indeed totally wrong.

Is that the case?
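
As an aside, the "same hash, same process" question can be probed
empirically. Perl's seed is controlled via PERL_HASH_SEED (and
PERL_PERTURB_KEYS in blead); Python exposes the analogous knob as
PYTHONHASHSEED, which serves below as a hedged, language-neutral sketch of
per-process iteration-order randomization. This is not Perl and proves
nothing about Perl's internals; it only demonstrates the general mechanism:

```python
import os
import subprocess
import sys

# Child program: the iteration order of this string set depends on the
# per-process hash seed.
PROG = "print(list({'a','b','c','d','e','f','g','h'}))"

def order_with_seed(seed):
    # Run a fresh interpreter with a fixed hash seed (PYTHONHASHSEED is
    # Python's rough analogue of Perl's PERL_HASH_SEED).
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.run([sys.executable, "-c", PROG],
                         env=env, capture_output=True, text=True)
    return out.stdout.strip()

# With one seed -- i.e. within one process -- the order is stable ...
assert order_with_seed(1) == order_with_seed(1)

# ... but different seeds (different processes, by default) may disagree.
orders = {order_with_seed(s) for s in range(5)}
print(len(orders), "distinct orderings from 5 different seeds")
```

If Perl's randomization likewise only varies the seed per process, ordering
within one process would remain stable, which is exactly the question asked
above.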

> > I am fully aware of them being volunteers, but so am I (and I am a regular
> > contributor as well), please keep that in mind.
> Perhaps you are, I never advanced an opinion on the subject.

Well, I am. I tell you so. What are your reasons to doubt me so much?

> > The perl5porters are not "better" than me as a major contributor to perl
> > in any way, and have no higher moral ground. Acting uppity because you are
> > part of a mailinglist and contribute more often than me is proving nothing
> > really.
> I never made any comments about anyone's individual merits. Nor do I
> know what I did that was "uppity".

I interpreted your comment as irony, as I am pretty confident it was meant
that way. If you really, honestly say you meant it as-is, I will, however,
believe you and admit my reaction to your last comment was wrong.

> Well I have responded to the only cogent concern that I have seen you
> express, which is that you think that these changes contradict the
> documentation. The documentation does not support you in this regard.

Well, it clearly does. It's your interpretation that doesn't support mine,
and now the code doesn't either.

> Is there some other matter I should respond to?

You don't have to respond at all, but if you do, I personally expect you
to use more than empty claims, and to actually provide evidence (which you
have done, for the most part).

> Oh, I suppose I forgot. Afaik, you don't even have to fix this, just
> take the patches I did to the JSON::PP tests and apply them.
> See:

But the bug is in the perl core. Nothing I apply can fix that, because I
can't apply fixes to the perl core.

Besides, it makes no sense to me to say "you don't have to fix that, just
apply the fix". What you mean is "you don't have to do the work to invent a
fix, it has already been done".

The problem is that I think the code is fine, and even if it isn't (by
sheer force of you applying a buggy patch and claiming it's a good and
reasonable and correct patch and that my code is broken), I think it
*should* be correct, because taking away a useful feature doesn't (in
itself) gain you anything.

All I urge you, again, is to consider doing the "hardening" either better,
in a compatible way (which I think is possible), or consider the trade-off
between a rather ineffective patch (that doesn't shield against similar
DOS attacks) and breaking code that ought to work.

I don't have any strong hope for that, and think you will opt for the
ignore-and-break-it route, which is a valid way to proceed from your side
of things. I did feel the moral obligation to tell you why it is wrong,
and if you still do it, you at least do it in an informed way.

                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_    
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /
      -=====/_/_//_/\_,_/ /_/\_\
