> I dont know about that. We make such assumptions in the regex engine,
> and possibly in terms of the expected encoding of source files
> without use locale, but i dont think we actually do mandate that it is
> latin-1 generally. And Im unconvinced that the suggestion made by Jan
> is as problematic as either your or Marc have said. If we used Win32
> API calls to convert/acccess system data as widechar (UTF16) and then
> converted the result to utf8 then we should be in the clear.

Except that all programs expecting command-line arguments, or filenames
stored in files, will not work, and there will be no way to make them
work? See below.

I don't want to get personal, but have you actually worked with Unicode
in Perl for a while? Introducing silent encoding changes as Jan proposes
is deadly, because there is a total lack of documentation on when they
happen. Changing user data is *evil*.

> And I dont believe that the problem is in reading data from a *file*.
> That type of issue is a) not win32 specific b) extremely common and c)
> well soved by the proper application of Encode and friends.

So give us a working example that works on Windows, and another that
works on Unix (preferably the same one, actually). Assume you have a
file that stores a filename in the current locale encoding (Unix) or in
the ANSI codepage (Windows), and you want to open that file. Just try
it.

> Currently as far as I know there is no way using perl to use the Win32
> widechar apis to create unicode filenames and directories. And if i
> understood Jan right then his suggestion would resolve that problem.

And it would utterly break Perl even more than it is currently broken.

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      pcg@goof.com
      -=====/_/_//_/\_,_/ /_/\_\
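For what it's worth, a minimal sketch of the Unix half of that scenario, assuming the stored name is in the current locale's codeset (the file name `names.txt` and its one-name-per-line layout are hypothetical, just for illustration):

```perl
use strict;
use warnings;
use Encode qw(decode encode);
use I18N::Langinfo qw(langinfo CODESET);

# Ask the locale what encoding byte strings are supposed to be in.
my $codeset = langinfo(CODESET);

# Read a byte string that is meant to be a filename.
open my $fh, '<:raw', 'names.txt' or die "names.txt: $!";
chomp(my $bytes = <$fh>);

# Decode to a perl character string for display or processing ...
my $name = decode($codeset, $bytes);

# ... but to actually open the file, the name must be encoded back to
# bytes, because on Unix open() hands raw bytes to the OS.
open my $data, '<:raw', encode($codeset, $name)
    or die "cannot open '$name': $!";
```

On Windows there is no analogous recipe that reaches the widechar API: `open()` goes through the ANSI codepage, so a stored name containing characters outside that codepage cannot be opened this way at all, which is exactly the problem being argued about here.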