On Tue, Nov 14, 2000 at 10:30:06AM -0600, Jarkko Hietaniemi wrote:
> > OK. What does a scalar with IV and NV both OK mean?
>
> I *think* it *should* mean that the cached IV and NV are both valid
> *and* that there is no loss of precision: (IV)nv == iv && (NV)iv == nv.
> Obviously, this doesn't seem to be the case.

OK. How do I detect that 18446744073709551616.0 isn't 18446744073709551615?

Round here:

  #include <stdio.h>

  int main (void) {
      unsigned long long max = 0xFFFFFFFFFFFFFFFFULL;
      double imprecision;

      imprecision = (double) max;
      printf ("imprecision=%.1f\tmax=%llu\n", imprecision, max);
      if (imprecision == ((double) max))
          puts ("imprecision == (double) max");
      if (((unsigned long long) imprecision) == max)
          puts ("(unsigned long long) imprecision == max");
      return 0;
  }

gives:

  imprecision=18446744073709551616.0	max=18446744073709551615
  imprecision == (double) max
  (unsigned long long) imprecision == max

How do I detect that the UV and the NV aren't equal, when all the casting
conspires to keep it from me? XOR the UV's low bits with something and see
whether that also is preserved?

[I *think* that this only happens at UV_MAX, and that we can detect during
Configure whether the platform needs this double test.]

Nicholas Clark