develooper Front page | perl.perl5.porters | Postings from February 2001

Re: IV preservation (was Re: [PATCH 5.7.0] compiling on OS/2)

From:
Ilya Zakharevich
Date:
February 16, 2001 12:35
Subject:
Re: IV preservation (was Re: [PATCH 5.7.0] compiling on OS/2)
Message ID:
20010216153517.C20979@math.ohio-state.edu
On Fri, Feb 16, 2001 at 08:52:52AM +0000, Alan Burlison wrote:
> Ilya Zakharevich wrote:
> 
> > > 64 bit integers are becoming more desirable, as file sizes increase -
> > 
> > I have seen no need yet to support file sizes above 10**15 bytes.
> 
> Don't assume that the only reason for 64 bit integers is large files. 
> The main reason 5.6.x is going into Solaris is because I need the 64-bit
> integer support.  Solaris hrtime_t is a 64 bit int, and lots of the data
> I want to manipulate is of this type.  It doesn't take very long for a
> 32-bit counter to overflow if the thing it is counting is nanoseconds.

How do you get 32-bit counters in Perl?  If you are discussing
something other than Perl which needs to be linked with Perl, then I
understand why *this* code may need to be 64-bit.  But in Perl you get
53-bit counters by default (the mantissa of an IEEE double), so going
up to 64 bits is not that critical.  Especially since, given the
extreme slowness of Perl, you do not have a chance to touch your
counter more than 10^14 times anyway...

Measuring time is a very elusive topic; it is very easy to make
errors.  E.g., if you count not from 0 but count nanoseconds from the
Epoch, then of course you went over 2^53 long ago...  But in this
context a 64-bit number has a much larger chance of being unusable
too: you cannot even multiply your count by 20 without overflow.

IMO, you would be much safer using long doubles for time measurement
in this situation.  Which puts us back in no-patch-needed land.

Ilya


