Kent Fredric wrote:
>I've seen *reams* of test suites that have this problem simply because
>of the fun with needing to deal with marshalling your strings through
>Test::More.

Do you mean things like the got/expected output from a failed is() test?
It periodically annoys me that this outputs control characters found in
the data.  In a similar vein, it seems like a bug that it tries to output
non-Latin-1 characters directly to a stream that it doesn't know can
handle them.  The fact that, as a result, the encoding of the diagnostics
is inconsistent within a single test script run certainly is a bug.  It
is a bug that has been allowed to arise and persist precisely because of
this broken core behaviour.

The testing infrastructure should really be using something like
Data::Dumper to represent unprintable and non-ASCII strings.  Even in a
fully Unicode-capable environment, sticking to ASCII has value for
testing output.  It has value to the immediate user in making "\x{391}"
visibly distinct from "\x{41}", and "\x{e9}" distinct from
"\x{65}\x{301}".  And it has value in making the test output robust for
communicating it via the myriad mechanisms we have that can't be relied
on to get the encoding right at every stage.

-zefram
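The ASCII-escaping idea described above can be sketched in a few lines.
This is only an illustration, not what Test::More actually does; the
helper name ascii_escape is made up for the example.  It renders every
character outside printable ASCII in \x{...} notation, which is what
makes "\x{e9}" (precomposed e-acute) visibly distinct from
"\x{65}\x{301}" ('e' plus a combining acute) in diagnostic output.

```perl
use strict;
use warnings;

# Hypothetical helper: render a string as pure ASCII, escaping every
# character outside the printable ASCII range as \x{...}.  Roughly the
# kind of representation the testing infrastructure could emit in
# diagnostics instead of raw characters.
sub ascii_escape {
    my ($s) = @_;
    return join '', map {
        my $cp = ord $_;
        ($cp >= 0x20 && $cp <= 0x7e) ? $_ : sprintf('\x{%x}', $cp)
    } split //, $s;
}

print ascii_escape("\x{e9}"),        "\n";   # precomposed e-acute
print ascii_escape("\x{65}\x{301}"), "\n";   # 'e' + combining acute
```

The two inputs render identically on most terminals, but the escaped
forms (\x{e9} versus e\x{301}) are unambiguous, and being pure ASCII
they survive any transport that mangles encodings.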