On Thu, Feb 15, 2001 at 09:47:41PM -0500, Ilya Zakharevich wrote:
> On Thu, Feb 15, 2001 at 10:54:57PM +0000, Nicholas Clark wrote:
> > > The *only* case when this adds some new functionality is on the
> > > platforms with 64-bit floats and 64-bit integers. These platforms
> > > are a tiny minority, and have a possibility to use long floats
> > > instead. So the patch is not *required*.
> >
> > True, it is not required, even on most 64 bit platforms.
> > But without it perl is assuming that floating point can fully
> > preserve integers. So to maintain numeric sanity while using 64 bit
> > integers one has to use long doubles.
> > How many platforms have a full set of trigonometric functions at
> > long double precision? ANSI C89 doesn't supply them. It doesn't even
> > supply square root at anything other than double.
>
> How many functions of what Perl uses are in ANSI C89? I would think
> around 20%...

True, but I would expect that most of the other functions perl uses are
"standard" on unix (and standardised by one or more of the official
Unix specifications) and emulated reasonably on most other platforms.

I have no trouble compiling perl on this FreeBSD machine; it has all
the functions you might expect of a "modern" Unix, but there is no
square root function at long double precision. (or sin)

I admit that I am assuming that most systems perl builds happily on
lack long double transcendental functions. I'm not sure whether this is
a valid assumption. If it is valid, is it a good idea to encourage
people to build perl with an NV precision which is not fully supported?

> > 64 bit integers are becoming more desirable, as file sizes increase -
>
> I have seen no need yet to support file sizes above 10**15 bytes.

The idea of doing file positions and offsets in floating point scares
me. [probably as much as this "madness" scares you]

> > I believe that you misrepresent what this patch wants to do.
> > Or I misunderstand what you say.
> >
> > It wants to move *integer* operations from FPU to "software" (read
> > integer hardware). All other operations remain in hardware.
>
> All the logic to decide *which* branch it wants to take is in software.

The revised version of add and subtract now performs the operation as
unsigned and tracks the sign separately, rather than having the sea of
branches, which were confusing [and therefore prone to error]. They are
now much simpler. The comparison operators do still have a tree of
logic to determine what to compare, but this logic does not involve the
two or three attempts to avoid overflow that the first code for add and
subtract had. It is not trivial, but I would not consider it intricate
or complicated (in that each pp function is understandable in
isolation, and the flow through each pp function is not convoluted).

> > I believe that this means that the only floating point operation
> > involved is determining if a floating point value is actually
> > exactly an integer. Why is this a problem?
>
> Do you know what IEEE says about such checks? Is -0 an integer? Is
> -Inf? Is NaN? Etc etc etc. Checking that a float is an integer is
> a non-trivial and a very expensive operation. [I have no idea how to
> answer these questions "correctly".]

I admit that I do not know what IEEE says. Assuming that (-Inf < 2**64)
is true, then -Inf will not be an integer [according to the casting
behaviour probed for by Configure] because it is out of range.
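
For what it's worth, here is a rough C sketch of the sort of
range-then-floor check I have in mind. It is illustrative only, not the
actual pp code: IV is stood in for by a plain long (so LONG_MIN/LONG_MAX
rather than perl's IV_MIN/IV_MAX), and the comparison at LONG_MAX is
only approximate when long is wider than the double mantissa.

    #include <stdio.h>
    #include <math.h>
    #include <limits.h>

    typedef double NV;   /* stand-ins for perl's configured types */
    typedef long   IV;

    /* Return 1 and store the integer in *result if nv holds an exact,
       in-range integer value; return 0 for NaN, +/-Inf, anything out
       of range, and anything with a fractional part. */
    static int nv_is_exact_iv(NV nv, IV *result)
    {
        if (nv != nv)                   /* NaN never compares equal   */
            return 0;
        if (nv < (NV)LONG_MIN || nv > (NV)LONG_MAX)
            return 0;                   /* also catches +/-Inf        */
        if (nv != floor(nv))            /* fractional part present    */
            return 0;
        *result = (IV)nv;               /* -0.0 gets here, becomes 0  */
        return 1;
    }

    int main(void)
    {
        NV test[] = { 3.0, 3.5, -0.0, -HUGE_VAL }; /* -HUGE_VAL is -Inf
                                                      on IEEE systems */
        int i;

        for (i = 0; i < 4; i++) {
            IV iv;
            if (nv_is_exact_iv(test[i], &iv))
                printf("%g is usable as the integer %ld\n",
                       test[i], (long)iv);
            else
                printf("%g is not usable as an integer\n", test[i]);
        }
        return 0;
    }

So -0 ends up as integer 0, while -Inf and NaN simply fall back to
being treated as (non-integer) NVs.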
The intent was to (and the implementation should)

1: the first time the conversion is requested, cache the result of the
   conversion, or the closest approximating integer, and either way
   flag that the IV slot holds a valid value; "private" if the
   conversion was not accurate, "public" if it was.

2: convert direct from PV to IV whenever looks_like_number determines
   that a string is an integer, which avoids an NV-IV conversion
   completely.

This will result in multiple calls into sv_2iv for a value flagged as
"private" to get the IV (but not multiple conversions), but after this
first (cached) conversion no calls into sv_2iv are needed just to read
the flags and realise that the SV holds an NV value that is not an
integer.

> My reading of sane numbers is that for most operations you know *in
> advance* to which form it will convert its arguments. The number of
> opcodes which inspect the type of the operand, then behave accordingly
> (here I mean ++, not & - no *drastic* changes in operation) is
> minimal. Being a few, you can memorize them, then *predict* what your
> program is doing.
>
> What Perl taught us is that DWIM is *not always* better than sliced
> bread. By a significant increase in the number of operations which
> inspect their arguments, the patch makes prediction harder, not
> easier.

I had not thought of this. Would it make sense to keep the 5.6 numeric
operator code base, and allow a program to switch back to it by means
of a "use float;" pragma that works in much the same way as the current
"use integer;"?

> > From the documentation of qu// in perlop, I fail to see why it's
> > anywhere near as horrible as changes to the numeric operators. I
> > would have thought that qu// was less bad, not the other way round.
>
> I found no documentation in perlop!
>
>     Like the qq manpage but generates Unicode for
>     characters whose code points are greater than 128,
>     or 0x80.
>
> That was all the documentation I found. "Generates Unicode"? What does
> it mean? "Generates bytes"? How do I distinguish "generated bytes"
> from "generated Unicode"?

Ah. These thoughts did not go through my head when I read perlop. If
they had, I might not have written quite what I did. I suspect I'm also
assuming I know what it's up to as a result of reading various threads
on p5p when it was first mooted.

Nicholas Clark