
PATCH: improve the efficiency of converting a sv to utf8

karl williamson
December 28, 2008 13:44
With the recent talk about speeding up utf8 counting, I decided it would 
be a good time to finish up a patch I'd been working on, mostly to 
sv_utf8_upgrade(), to improve the efficiency.

Consider what currently happens when the tokenizer is scanning a string.
It looks through it byte-by-byte until it finds a character that forces
it to decide to go to utf8.  It then calls sv_utf8_upgrade() with the
portion of the string scanned so far.

sv_utf8_upgrade() starts over from the beginning, and scans the string
byte-by-byte until it finds a character that varies between non-utf8 and
utf8.  It then calls bytes_to_utf8().

bytes_to_utf8() allocates a new string big enough for the worst-case
expansion of the entire string (2n+1 bytes), starts over from the
beginning, and scans the input string byte-by-byte, copying and
converting each character to the output string as it goes.
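To make the 2n+1 figure concrete, here is a minimal sketch of that style of conversion (the function name latin1_to_utf8 is made up for illustration; this is not the actual bytes_to_utf8() code).  Every input byte at or above 0x80 (a "variant" in Perl's terms) expands to two UTF-8 bytes, so n input bytes need at most 2n output bytes plus a trailing NUL:

```c
#include <stdlib.h>

/* Sketch of the bytes_to_utf8() approach: allocate for the worst case
 * up front, then convert byte-by-byte.  Bytes below 0x80 are invariant
 * (same representation in both encodings); bytes >= 0x80 become a
 * two-byte UTF-8 sequence, hence the 2n+1 worst-case allocation. */
static unsigned char *latin1_to_utf8(const unsigned char *src, size_t len,
                                     size_t *dstlen)
{
    unsigned char *dst = malloc(2 * len + 1);   /* worst-case allocation */
    unsigned char *d = dst;
    size_t i;

    for (i = 0; i < len; i++) {
        if (src[i] < 0x80) {
            *d++ = src[i];                      /* invariant: copied as-is */
        } else {
            *d++ = 0xC0 | (src[i] >> 6);        /* two-byte UTF-8 sequence */
            *d++ = 0x80 | (src[i] & 0x3F);
        }
    }
    *d = '\0';
    *dstlen = (size_t)(d - dst);
    return dst;
}
```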

It doesn't return the size of the new string, so sv_utf8_upgrade()
assumes it is only as big as what actually got converted, discarding
knowledge of any spare capacity.

It then returns to the tokenizer, which immediately does a grow to get
space for the unparsed input.  This is likely to cause a new string to
be allocated and copied from the one we just created, even if that
string actually had enough space in it.

Thus, the invariant head portion of the string is scanned three times,
and probably two strings will be allocated and copied.

My solution cuts this down by doing several things.

First, I added an extra flag for sv_utf8_upgrade() that says: don't
bother checking whether the string needs to be converted to utf8; just
assume it does.  This eliminates one of the passes.

I also added a new parameter to sv_utf8_upgrade that says when you
return, I want this much unused space in the string.  That eliminates
the extra grow.

This was all done by renaming the current work-horse function from 
sv_utf8_upgrade_flags to be sv_utf8_upgrade_flags_grow() and making the 
current function name be a macro which calls the revised one with a 0 
grow parameter.
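The renaming can be sketched like this (the function body is a stub for illustration, not Perl's real implementation; only the names and the macro shape come from the patch description).  Because the old name becomes a macro that passes a grow of 0, existing callers compile unchanged:

```c
#include <stddef.h>

typedef struct sv SV;   /* opaque stand-in for Perl's SV type */

/* The work-horse gains a trailing parameter: how much unused space the
 * caller wants left in the string on return.  This stub just reports
 * what it was asked for. */
size_t sv_utf8_upgrade_flags_grow(SV *sv, int flags, size_t extra)
{
    (void)sv;
    (void)flags;
    /* ... real code would convert sv to utf8 and ensure `extra`
     * spare bytes of allocation ... */
    return extra;
}

/* The old entry point becomes a macro calling the revised function
 * with a grow parameter of 0. */
#define sv_utf8_upgrade_flags(sv, flags) \
    sv_utf8_upgrade_flags_grow(sv, flags, 0)
```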

I also improved the internal efficiency of sv_utf8_upgrade() so that
when it does scan the string, it doesn't call bytes_to_utf8() but does
the conversion itself.  It uses a fast memory copy instead of the
byte-oriented one for the invariant header, it uses that header to get
a better estimate of the needed size of the new string, and it doesn't
throw away the knowledge of the allocated size.
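Those two internal improvements can be sketched as follows (upgrade_sketch is a made-up name; this is an illustration of the technique, not the patch code).  The leading run of invariant bytes is bulk-copied with memcpy, and only the tail after it can expand, which gives a tighter allocation than 2n+1 over the whole string:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: memcpy the invariant (ASCII) head, and size the new buffer
 * from the head length rather than assuming every byte might expand. */
static unsigned char *upgrade_sketch(const unsigned char *src, size_t len,
                                     size_t *dstlen)
{
    size_t invariant = 0;
    while (invariant < len && src[invariant] < 0x80)
        invariant++;                       /* length of the invariant head */

    /* Only the tail can expand, so 2*(len - invariant) bounds its growth. */
    unsigned char *dst = malloc(invariant + 2 * (len - invariant) + 1);

    memcpy(dst, src, invariant);           /* fast copy of the head */

    unsigned char *d = dst + invariant;
    size_t i;
    for (i = invariant; i < len; i++) {
        if (src[i] < 0x80) {
            *d++ = src[i];
        } else {
            *d++ = 0xC0 | (src[i] >> 6);
            *d++ = 0x80 | (src[i] & 0x3F);
        }
    }
    *d = '\0';
    *dstlen = (size_t)(d - dst);
    return dst;
}
```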

And, if it is clear without scanning the whole string that the
conversion will fit in the already allocated string, it just uses that
instead of allocating and copying a new one, using the algorithm I
copied from the tokenizer.  (In this case it does have to finish
scanning the whole string to get the correct size.)  The comments have
more details.
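A sketch of that in-place technique (upgrade_in_place is a made-up name; the details here are my illustration of the general approach, not the tokenizer's code): first finish scanning to count the variant bytes, then, if the existing allocation has room for the expansion, convert from the end backward so no byte is overwritten before it has been read:

```c
#include <stddef.h>

/* Sketch: count variants (a full scan), then expand in place, back to
 * front, if the buffer already has room.  Returns the new length, or 0
 * if the caller must fall back to allocating a new buffer (or if the
 * string was all-invariant and needs no change). */
static size_t upgrade_in_place(unsigned char *buf, size_t len, size_t avail)
{
    size_t variants = 0, i;
    for (i = 0; i < len; i++)
        if (buf[i] >= 0x80)
            variants++;                 /* each variant grows by one byte */

    size_t newlen = len + variants;
    if (variants == 0 || newlen + 1 > avail)
        return 0;

    unsigned char *s = buf + len;       /* one past the last source byte */
    unsigned char *d = buf + newlen;    /* one past the last dest byte */
    *d = '\0';
    while (s > buf) {
        unsigned char c = *--s;
        if (c < 0x80) {
            *--d = c;
        } else {
            *--d = 0x80 | (c & 0x3F);   /* write the sequence backward */
            *--d = 0xC0 | (c >> 6);
        }
    }
    return newlen;
}
```

Because the string only ever grows, the destination pointer stays at or ahead of the source pointer during the backward pass, so the copy is safe without a temporary buffer.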

The conversion is still byte-oriented.  Vectorization and the like
could yield further performance improvements; one idea for that is in
the comments.

The patch also includes a new synonym I created which is a more accurate 
name than NATIVE_TO_ASCII.

And, there are two patches, the second to be applied after the first, 
which is what git did.  I realized after I had done a commit that I 
shouldn't have deleted some comments, and I didn't know how to back out, 
so what I tried ended up being a second patch.
