[NTG-pdftex] a dream about options

Hans Hagen pragma at wxs.nl
Mon Nov 14 10:34:25 CET 2005


Pawel Jackowski wrote:

> Hi Hans!
>
> thanks for response.
>
>>> Hello,
>>>
>>> I suppose the issue was discussed before, but I couldn't find it in 
>>> the archives.
>>>
>>> Making high-level macros in (pure) TeX runs into the problem of 
>>> optionality [...]
>>
>
>> in the previous century taco made some extensions dealing with 
>> key/val parsing (and other kinds of parsing); at that time we did 
>> quite some testing of speed issues, and the price to pay was higher 
>> than the gain; keep in mind that in tex catcodes and expansion always 
>> kind of interfere in such things; 
>
>
> Indeed, to keep things consistent, parameter text should always be 
> expanded before being swallowed by \macro... Anyway, pdfTeX handles 
> that somehow, supporting both expansion and catcoding.

parsing keys as in \hrule ... is kind of special anyway: one can 
overload (repeat specs) but not all of them are treated the same; there 
are fuzzy optimizations and such involved, so there is not much reusable 
code there
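
as a small illustration (plain tex): rule specs may be repeated, and a 
later keyword simply overrides an earlier one, so the following rule 
ends up 4pt high, 0pt deep and 2cm wide:

  \hrule height 2pt depth 1pt height 4pt depth 0pt width 2cm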

>
>> ok, if you limit key/val parsing to 'strings' only (i.e. no 
>> expansion, no catcode support) then it can be faster, but at the same 
>> time it also becomes useless for everyday usage; as with many tex 
>> things, because tex is rather optimized, hard coded alternatives 
>> (extensions to the language) do not always pay off; also consider 
>> implementation issues like namespacing, scope of keys, etc
>
>
> ...and the parameter handling rules in such an extension would 
> probably be faaaar away from TeX. But if attempts were made in the 
> previous century, maybe it's high time to repeat the tests? In the 
> previous century, features such as decompressing/recompressing PNG at 
> runtime didn't pay off either. If you don't consider it doable, I will 
> reopen the request in the next century. -)
>
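
(to make the 'strings only' key/val parsing mentioned above concrete, 
here is a minimal plain tex sketch of what it could look like; all 
macro names here (\setkeyvals, \kvloop, \kvpair, \kvstop) are invented 
for this illustration, values are stored verbatim without any 
expansion, and spaces after commas or '='/',' inside values are not 
handled:)

  \def\kvstop{\kvstop}% sentinel used to detect the end of the list

  \def\setkeyvals#1{\kvloop#1,\kvstop,}

  \def\kvloop#1,{%
    \def\kvitem{#1}%
    \ifx\kvitem\kvstop
      \let\kvnext\relax % hit the sentinel: stop
    \else
      \kvpair#1==\kvstop % split the current item into key and value
      \let\kvnext\kvloop % and carry on with the next item
    \fi
    \kvnext}

  % the extra '=' in the call makes the pattern match even for a bare key
  \def\kvpair#1=#2=#3\kvstop{%
    \expandafter\def\csname KV:#1\endcsname{#2}}

  \setkeyvals{width=10pt,style=bold}
  % afterwards \csname KV:width\endcsname expands to 10pt
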
sure, but then 'new methods' pop up and/or constraints disappear (mem); 
tex is really optimized (i did lots of timing, and many low level 
context commands as well as the k/v handling are optimized to the max); 
on the other hand, the bigger memory makes it possible to spend a bit 
more than in the past, i.e. to write less fuzzy code (much tex code is 
fuzzy because of speed and mem issues); btw, one reason for context 
being 'slow' was that it had all this key value etc stuff and used 
quite some hash and mem; recently someone reported on the tex live list 
that latex had become much slower than in the past, and taco's answer 
was that it's not related to a slower pdftex or so, since context had 
not become slower: it was just that bigger (la)tex formats take more 
time to initialize, etc etc, and all packages become bigger; if you 
want to speed up macro packages, think of what happens in there: even 
in a parameter driven package like context, the amount of expanded code 
(tracingall) related to k/v handling is small compared to messing 
around with boxes (esp when \copy is used) and to saving/restoring, as 
well as to the redundant passing around of arguments (the #1 things)
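
(a rough way to get a feeling for this, assuming a pdftex recent enough 
to provide \pdfresettimer and \pdfelapsedtime, which report elapsed 
time in scaled seconds, i.e. multiples of 1/65536 s; the box content 
and the repeat count are arbitrary:)

  \newbox\testbox
  \setbox\testbox=\hbox{some reasonably long line of text to duplicate}
  \newcount\n
  \pdfresettimer
  \n=0
  \loop
    \setbox0=\copy\testbox % \copy clones the whole node list every pass
    \advance\n by 1
  \ifnum\n<50000 \repeat
  \message{copy: \the\pdfelapsedtime\space scaled seconds}

replacing \copy by \box (which merely moves the box register, leaving 
\testbox empty after the first pass) or by a simple \def assignment 
gives an idea of the relative costs.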

another thing to keep in mind: when discussing things like 
reimplementing tex, one can be tricked into thinking that there should 
be fast methods for char handling, hyphenation, etc etc (i.e. that this 
should drive the design of, for instance, classes in oo-like 
approaches); however, with more modern macro package features, tex 
spends much of its time manipulating boxes (lists) in the otr (output 
routine), and it could be that modern programming languages provide 
features that would make such critical operations faster; so the gain 
is in other areas (take pdftex: pdf inclusion is incredibly fast, 
simply because that piece is written in a different language/way); 
compare it to scripting languages: they excel in handling text simply 
because they're made with that in mind.

Hans
