[NTG-context] lmtx

Hans Hagen j.hagen at xs4all.nl
Fri Jun 19 12:11:03 CEST 2020


Hi,

Here's an update on what Wolfgang and I are doing in the lmtx code 
base, i.e. where it differs from the mkiv code. Much of it is about 
cleaner code, lower memory usage and less tracing clutter. To some extent 
it might be more efficient in terms of runtime, but in practice there is 
not much (measurable) to gain at the macro level with respect to speed. 
Most bottlenecks are at the Lua end anyway.

(1) macro arguments:

In tex a macro argument is defined with numbers, so

   \def\foo#1#2{... #1 ... #2 ...}

in luametatex we can have some variants

   #0 : the argument is picked up but not stored
   #- : the argument is ignored and not even counted
   #+ : the argument is picked up but no outer braces are removed

There is some explanation in the evenmore document, like:

   \def\foo    [#1]{\detokenize{#1}}
   \def\ofo    [#0]{\detokenize{#1}}
   \def\oof    [#+]{\detokenize{#1}}
   \def\fof[#1#-#2]{\detokenize{#1#2}}
   \def\fff[#1#0#3]{\detokenize{#1#3}}

   \meaning\foo\ : <\foo[{123}]> \crlf
   \meaning\ofo\ : <\ofo[{123}]> \crlf
   \meaning\oof\ : <\oof[{123}]> \crlf
   \meaning\fof\ : <\fof[123]>   \crlf
   \meaning\fff\ : <\fff[123]>   \crlf

I mention this because you may encounter something other than #1 .. #9 
in error reports or tracing.
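For what it's worth, with these definitions the test lines above should 
come out roughly as follows (my reading of the evenmore examples, so 
take it as a sketch rather than gospel):

```tex
% expected (detokenized) results, assuming the evenmore semantics:
%
%   \foo[{123}]  ->  123      #1 strips the outer braces
%   \ofo[{123}]  ->  (empty)  #0 gobbles, so nothing is stored in #1
%   \oof[{123}]  ->  {123}    #+ keeps the outer braces
%   \fof[123]    ->  13       #- eats the 2 and is not even counted
%   \fff[123]    ->  13       #0 eats the 2 but still counts as slot 2
```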

(2) interception of unmatched delimiters

If you define macros like

   \def\foo[#1=#2]{...}

the brackets and the equal sign had better be there. However, there is 
some trickery in the macro parser that can be used to intercept cases 
where they are missing: optional tokens, as well as forcing an end of 
scanning. So if you see \ignorearguments and \ifarguments in the code, 
that is what we're talking about: quitting prematurely, checking why 
that happened, and acting upon it. Less hackery is needed, but then of 
course there is also less opportunity to 'show off how clever one is 
wrt writing obscure macros' ... so be it.
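As a rough illustration (the primitives \ifarguments and 
\ignorearguments come from luametatex, but the exact branch layout here 
is my assumption; the evenmore document has the real story):

```tex
% a sketch: \ifarguments acts like an \ifcase on the number of
% arguments that actually got matched before scanning quit
\def\foo[#1=#2]%
  {\ifarguments
     % zero arguments matched: scanning quit right away
   \or
     % one argument matched: the "=#2" part was missing
     (key: #1, no value)%
   \or
     % both arguments matched: the full [#1=#2] pattern was there
     (key: #1, value: #2)%
   \fi}
```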

Both (1) and (2) are relatively lightweight extensions that are 
downward compatible and have no impact on performance. We don't expect 
them to be used at the user level (where one seldom defines macros 
anyway), but we apply them in some low level macros (and might apply 
them more). One can measure better performance, but only when they are 
applied millions of times, which is seldom the case in context (there 
we're often talking thousands of expansions rather than millions).

(3) There is some more 'matching token sequences' code that is being 
applied stepwise, but again this is normally not seen at the user 
level. It sometimes permits cleaner, fully expandable solutions.

(4) There are some additional if tests, but these have already been 
around for quite a while and probably went unnoticed anyway.
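One of the newer tests that might show up in tracing is \iftok, which 
compares two token lists (the name comes from luametatex; my usage here 
is a hedged sketch, not taken from the lmtx sources):

```tex
% a sketch: \iftok{...}{...} is true when the two token lists match,
% which avoids the classic \edef-plus-\ifx comparison dance
\def\whatever#1%
  {\iftok{#1}{yes}%
     indeed%
   \else
     nope%
   \fi}
```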

(5) Nested conditions can be flattened by using \orelse, and that is 
what now happens in quite a few places in the lmtx code (it has been 
there for quite a while and is reasonably well tested). Again this 
leads to less tracing clutter, can in principle be more efficient, and 
demands fewer multiple \expandafter situations. It also just looks 
nicer.
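A flattened condition then looks like this (a minimal sketch; \orelse 
continues the current condition without opening a new nesting level, 
and \scratchcounter is just a convenient context register):

```tex
% nested, traditional tex:
\ifnum\scratchcounter>100
  big%
\else
  \ifnum\scratchcounter>10
    medium%
  \else
    small%
  \fi
\fi

% flattened with \orelse (luametatex):
\ifnum\scratchcounter>100
  big%
\orelse\ifnum\scratchcounter>10
  medium%
\else
  small%
\fi
```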

(6) Upcoming code can use some more (simplified) grouping magic but that 
is something we need to test more (which is what we are doing now).

Just so you know, in case you get some error and see unfamiliar primitives,

Hans

-----------------------------------------------------------------
                                           Hans Hagen | PRAGMA ADE
               Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
        tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------

