On 11/28/2016 02:18 AM, Dave Mitchell wrote:
> I've also been looking into whether the lexer's token recogniser
> (keywords.c) could be done in a different way: the current approach, as
> used for the last 10 years, generates a custom code-driven trie, which
> uses very few CPU instructions; but these days branch prediction is more
> important, and that code style kills branch prediction. I suspect that a
> data-driven trie will perform better now.

I would be surprised if speeding this up had much effect on real programs, except possibly for string evals in an inner loop. Am I wrong?
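
For concreteness, here is a minimal sketch of the two approaches being contrasted. It is not the real output of keywords.c (which covers all of Perl's keywords); it recognises just three toy keywords ("if", "in", "is"), and the function and table names are made up for illustration:

    /* Code-driven trie: every character comparison is an explicit
     * branch, so the branch pattern the CPU sees depends entirely on
     * which keyword (or non-keyword) comes in -- hard to predict. */
    #include <stdio.h>
    #include <string.h>

    static int keyword_code_driven(const char *s, size_t len)
    {
        if (len == 2 && s[0] == 'i') {
            switch (s[1]) {
            case 'f': return 1;   /* "if" */
            case 'n': return 2;   /* "in" */
            case 's': return 3;   /* "is" */
            }
        }
        return 0;                 /* not a keyword */
    }

    /* Data-driven trie: a single loop walks a transition table.  The
     * only branches are the loop condition and a range check, and both
     * predict well no matter which keyword is being matched. */
    enum { NSTATES = 6, FAIL = 0, START = 1 };

    static const unsigned char trans[NSTATES][26] = {
        [START]['i' - 'a'] = 2,    /* start --i--> state 2   */
        [2]['f' - 'a'] = 3,        /* "i"   --f--> accepting */
        [2]['n' - 'a'] = 4,        /* "i"   --n--> accepting */
        [2]['s' - 'a'] = 5,        /* "i"   --s--> accepting */
    };

    /* keyword id for each accepting state; 0 = not a keyword */
    static const unsigned char accept_kw[NSTATES] = {
        [3] = 1, [4] = 2, [5] = 3
    };

    static int keyword_data_driven(const char *s, size_t len)
    {
        unsigned state = START;
        for (size_t i = 0; i < len && state != FAIL; i++) {
            if (s[i] < 'a' || s[i] > 'z')
                return 0;
            state = trans[state][s[i] - 'a'];
        }
        return accept_kw[state];
    }

    int main(void)
    {
        const char *probe[] = { "if", "in", "is", "it", "ifx" };
        for (size_t i = 0; i < sizeof probe / sizeof *probe; i++)
            printf("%-3s  code=%d  data=%d\n", probe[i],
                   keyword_code_driven(probe[i], strlen(probe[i])),
                   keyword_data_driven(probe[i], strlen(probe[i])));
        return 0;
    }

Whether the table walk actually wins would of course need measuring: the transition table costs cache footprint that the generated branches don't, which is exactly the kind of trade-off that has shifted on modern CPUs.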