[NTG-pdftex] processing speed

Taco Hoekwater taco at elvenkind.com
Sat May 30 13:57:35 CEST 2009


Hartmut Henkel wrote:
> On Sat, 30 May 2009, Taco Hoekwater wrote:
> 
>> Hartmut Henkel wrote:
>>> On Sat, 30 May 2009, The Thanh Han wrote:
>>>
>>>> so indeed pdftex seems to spend a lot of time allocating memory. The
>>>> number of zgetnode() calls is the same (708325535 in both cases),
>>>> however pdftex calls took more time...
>>> and it looks like 1.40.6 is ok, no slowdown. The main difference may
>>> be that 1.40.6 has no synctex yet.
>> I am running a luatex test, and it does not seem to slow down
>> (it is not even at 20% yet, though; this processor is fairly old)
> 
> Looks like you are using another get_node() method. In pdftex it seems
> that it rather often goes to
> 
>     @<Grow more variable-size memory and |goto restart|@>=
> 
> There is not much time-consuming code in this chunk, but the "goto
> restart" may trigger a lengthy walk of the rover list each time. At
> least when I increase t:=lo_mem_max+100000 so as to grow mem in larger
> chunks, the slowdown is much less dramatic. But then, why does it need
> so much more memory?

Perhaps the node merge is failing too often due to fragmentation?
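
For anyone following along who has not read that part of the WEB
source, here is a toy C model of the restart path Hartmut means. It is
not the actual web2c code (the free list is simplified to singly
linked, and grow_step and all the sizes are made-up numbers); it only
shows why a small grow step hurts: every time nothing on the free list
fits, the pool grows a little and the whole list gets rescanned.

    #include <stdio.h>
    #include <stdlib.h>

    /* one block on the variable-size free list */
    typedef struct blk {
        size_t size;             /* words left in this free block        */
        struct blk *next;
    } blk;

    static blk *rover = NULL;        /* head of the free list            */
    static size_t grow_step = 1000;  /* words added per "grow" (made up) */
    static size_t grow_count = 0;    /* how often the grow path ran      */

    /* stands in for "Grow more variable-size memory": add a fresh block */
    static void grow_pool(void)
    {
        blk *b = malloc(sizeof *b);
        b->size = grow_step;
        b->next = rover;
        rover = b;
        grow_count++;
    }

    /* stands in for get_node(s): scan the list, else grow and restart   */
    static void get_node(size_t s)
    {
    restart:
        for (blk *p = rover; p != NULL; p = p->next) {
            if (p->size >= s) {
                p->size -= s;    /* carve s words off this block         */
                return;
            }
        }
        grow_pool();             /* nothing fits: grow ...               */
        goto restart;            /* ... and rescan the whole list        */
    }

    int main(void)
    {
        for (int i = 0; i < 100000; i++)
            get_node(8);         /* 800000 words requested in total      */
        /* a small grow_step means the rescan-and-grow path above runs
           very often; a big step (Hartmut's lo_mem_max + 100000) makes
           it run far less often, and that is where the time goes        */
        printf("grow/restart path taken %zu times\n", grow_count);
        return 0;
    }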

I've quit luatex now (I got bored) and at exit it reported the
following node usages:

   ? x
   node memory in use: 601252 words out of 2495936
   rapidly available: 1:8, 2:8, 3:551575, 4:326, 5:726, 6:4, 7:1621,
      9:14, 10:22 nodes
    current usage: 92 hlist, 1 vlist, 1 rule, 1891 glue, 1000 kern,
      6 penalty, 7201 glyph, 293 glue_spec, 1 temp, 2 local_par,
      1 dir nodes
   Output written on testplain.dvi (457908 pages, 590050796 bytes).
   Transcript written on testplain.log.

Of course, as I made it quit mid-page, the "in use" and "current
usage" reports are not too meaningful, but notice that the "rapidly
available" report says that there are now 551575 nodes of size 3
available.

That looks ridiculous, because nothing ever asks for that many 3-word
nodes, but it helps to know that whenever luatex's get_node() discovers
a small bit of otherwise useless memory, that bit is automatically
transferred to the "rapidly available" list.
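
Again a toy sketch rather than luatex's actual code (MIN_VARIABLE,
MAX_QUICK and every size here are assumptions, picked only for
illustration): it just shows how each split that leaves a scrap below
the variable-list minimum parks that scrap on a per-size free list, so
a size that nobody ever asks for can still pile up there.

    #include <stdio.h>

    #define MAX_QUICK    10  /* sizes with their own quick list (assumed) */
    #define MIN_VARIABLE 4   /* assumed minimum for the variable list     */

    /* stand-in for the per-size "rapidly available" free lists */
    static unsigned long quick_count[MAX_QUICK + 1];

    /* called with the size of the scrap left after splitting a block */
    static void recycle_remainder(int leftover)
    {
        if (leftover <= 0)
            return;
        if (leftover < MIN_VARIABLE && leftover <= MAX_QUICK)
            quick_count[leftover]++;  /* goes to the "rapidly available" list */
        /* a bigger leftover would stay on the variable-size list instead */
    }

    int main(void)
    {
        /* made-up sizes: carving 7-word requests out of 10-word free
           blocks leaves a 3-word scrap every single time               */
        for (int i = 0; i < 551575; i++)
            recycle_remainder(10 - 7);
        printf("size-3 nodes parked on the quick list: %lu\n", quick_count[3]);
        return 0;
    }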

That probably accounts for almost all of the 550K 3-word nodes. I
suspect something similar happens in pdftex, and that this problem has
somehow become more apparent with the addition of synctex (synctex
makes many nodes larger, so the chance of too-small leftover chunks
becomes higher). Does that make sense?
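
(To put made-up numbers on that last point: a 12-word free block holds
three 4-word nodes exactly, but once synctex grows those nodes to 5
words the same block yields only two of them plus a 2-word scrap, and
scraps like that can only pile up on the fixed-size lists.)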

Best wishes,
Taco
