On Sat, Feb 20, 2010 at 10:51, Hans Hagen wrote:
> On 20-2-2010 5:06, Mojca Miklavec wrote:
>> I don't know what Hans's metric files look like, but:
>>
>> - texfont --ve=yandy ... doesn't do anything here
>> - texfont --ve=bh ... creates a bunch of files, however *not* the T1
>>   files, which are of crucial importance to me; T1 (=ec) is only
>>   supported via virtual fonts. (And even then the character looks
>>   just about terrible, but still better than not having it at all.)
> afaik originally yandy only shipped texnansi metrics
\definetypescriptprefix [e:ec]       [8t]
\definetypescriptprefix [e:texnansi] [8y]
\definetypescriptprefix [e:8r]       [8r]

Quite possible. Lucida doesn't provide all the ec glyphs anyway (č is
not present), but the virtual font on CTAN apparently fakes it
successfully by placing a caron over the c, combining two glyphs.
("Successfully" only in the technical sense that it works at all;
typographically it's obvious that whoever made the virtual font has
never used that glyph.)
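To make the mapping concrete, here is how such a prefix resolves inside
a typescript. This is a sketch of my own; only the hlhr base name
(LucidaBright roman) is taken from the actual font set:

```tex
% Sketch: a synonym built from a base name plus a typescript prefix.
% With the prefixes above, the serif regular resolves to:
%   hlhr + {e:texnansi} -> hlhr8y.tfm
%   hlhr + {e:ec}       -> hlhr8t.tfm  (the virtual font)
%   hlhr + {e:8r}       -> hlhr8r.tfm
\starttypescript [serif] [lucida] [ec,texnansi,8r]
  \definefontsynonym
    [LucidaBright]
    [hlhr\typescriptprefix{e:\typescriptthree}]
    [encoding=\typescriptthree]
\stoptypescript
```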
> however, in context we have this texnansi-* naming scheme and for a
> while (as i used lucida often) i shipped the texnansi-* metrics
The metrics are there already; it's only that their naming scheme is
weird. But unless someone really depends on raw tfm names, the
following should work OK:

\starttypescript [mono] [lucida] [ec,texnansi,8r]
  \definefontsynonym [\typescriptthree-lbtr]  [hlcrt\typescriptprefix{e:\typescriptthree}]  [encoding=\typescriptthree] % LucidaTypewriter
  \definefontsynonym [\typescriptthree-lbtb]  [hlcbt\typescriptprefix{e:\typescriptthree}]  [encoding=\typescriptthree] % LucidaTypewriterBold
  \definefontsynonym [\typescriptthree-lbto]  [hlcrot\typescriptprefix{e:\typescriptthree}] [encoding=\typescriptthree] % LucidaTypewriterOblique
  \definefontsynonym [\typescriptthree-lbtbo] [hlcbot\typescriptprefix{e:\typescriptthree}] [encoding=\typescriptthree] % LucidaTypewriterBoldOblique
  \definefontsynonym [LucidaTypewriter]            [\typescriptthree-lbtr]  [encoding=\typescriptthree]
  \definefontsynonym [LucidaTypewriterBold]        [\typescriptthree-lbtb]  [encoding=\typescriptthree]
  \definefontsynonym [LucidaTypewriterOblique]     [\typescriptthree-lbto]  [encoding=\typescriptthree]
  \definefontsynonym [LucidaTypewriterBoldOblique] [\typescriptthree-lbtbo] [encoding=\typescriptthree]
  \loadmapfile[lucida.map]
\stoptypescript

If one doesn't need the texnansi-lbr synonyms, this can be done more
efficiently in a single step. Alternatively we can of course ship the
metrics, but think of a LaTeX user with an already working LaTeX
installation of Lucida: wouldn't it be great if it worked in ConTeXt
out-of-the-box?
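For what it's worth, a typescript like the one above would then be
hooked into a document roughly along these lines (my own sketch; the
typeface name "lucida" in \definetypeface and the choice of the
texnansi instance are illustrative, not from this thread):

```tex
% Sketch: activating the mono typescript above in an MkII document.
% The typeface name "lucida" here is illustrative.
\starttypescript [lucida]
  \definetypeface [lucida] [tt] [mono] [lucida] [default]
    [encoding=texnansi]
\stoptypescript

\usetypescript [lucida]
\setupbodyfont [lucida,tt,10pt]

\starttext
A test line in LucidaTypewriter.
\stoptext
```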
> now, when yandy went out of business and tug took over, things
> changed and, as the original tfm (math) metrics were no longer
> shipped, we ended up in some deadlock: context supported the original
> fonts (present on machines of users) while tug didn't ship those 8
> tfm files needed for math
But now the tfm files for math are shipped, or am I missing something?
Do you mean some other math fonts? The package on CTAN has the
following tfm files (and maybe some extra ones in vf that I didn't
manage to check completely yet):

hlcda  LucidaNewMath-Arrows-Demi

> so, at that point i simply gave up on lucida (i had my own texnansi-*
> + original math tfm files) as changing context would break existing
> lucida usage
>
> interesting is that the lucida metrics (afaik) are not shipped with
> tex live
> so we cannot create a robust solution unless we ship with the
> minimals:

What does "robust" mean? One needs to buy the font anyway, so it won't
work out of the box in any case, but it would be nice if it worked out
of the box with the files that TUG does ship. Metrics in TeX Live
don't really help without the pfb files, and if one installs the pfb
files, one may just as easily copy everything else along. Of course we
can add the metrics to the minimals (as an optional install), but I
would prefer them to match the ones from TUG (currently they are also
available on CTAN).

> for mkiv (luatex):
> - afm files (for mkiv)
> - map file (can be a small one)
> - only some 8 math tfm files
>
> for mkii:
> - map file
> - tfm files (dunno which ones)
> - vf files (dunno which ones)

I can double-check which ones are not needed, but we could drop at
least the 8r encoding.

> for mkiv i already adapted the typescripts (in beta), for mkii we
> need different mappings

I'll take a look. I have noticed that math-vfu does use the metric
names from the latest set of fonts.
However, one needs to keep in mind that it's not a 100% compatible
encoding (the "rm" doesn't correspond 100% to the same glyphs as in
LM; it is rather a roman variant of "mi", so at least the brackets
need to be taken from somewhere else). Also, there are many extra
glyphs all over the place that had better be mapped to Unicode if one
wants to use them.

I have fixed the typescripts, so that one doesn't need to generate any
additional files apart from those present on CTAN (and we can also add
them to minimals, but it might be nice to cooperate with Hans first to
prevent any name clashes). I'll send the typescripts once I figure out
some problems, but:

> ok, alternatively we could ship the texnansi-* and ec-* variants but
> who cares

For me, the fewer files, the fewer problems. If some other files are
already considered "standard" and work out of the box, it would be
nice to use those. (Unless someone convinces Karl to ship a few extra
metrics, but you already said that he declined the idea.)

1.) Hans, why does
\definefontsynonym [LucidaBright] [file:hlhr.pfb]
fail with the message below? (I can send a complete example off-list.)
How does one use pfb fonts then? Anyway, \definefontsynonym
[LucidaBright] [name:LucidaBright] works fine, so that's ok for now.
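For the record, the two lookup forms side by side in a minimal MkIV
test file (a sketch; it assumes hlhr.pfb is installed where luatex can
find it):

```tex
% Sketch: MkIV font lookups by file name vs. by internal font name.
% The file: form is what fails for me; the name: form works.
%\definefontsynonym [LucidaBright] [file:hlhr.pfb]     % fails here
\definefontsynonym [LucidaBright] [name:LucidaBright]  % works

\starttext
{\definedfont[LucidaBright at 12pt] A test line in Lucida Bright.}
\stoptext
```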
Using the accents in XeTeX would require extra tricks anyway, so
concentrating on pdftex and luatex seems reasonable.

> i have no clue ... but best test with the latest beta
>
> also, i did a fix in math-vfu (extra nil check) as some shapes seem
> to be missing
>
> even if it works, luatex can quit when embedding the file (buglet)
> but that has been fixed by taco yesterday

I will check that (one needs to check the vectors anyway).

2.) When reading typescripts for mkii with mkiv ("ec" encoding that is
based on virtual fonts), I don't get any accent at all, not even š and
ž that are part of texnansi encoding and are present in font. Why is
that?

> no clue ... maybe because yandy only bothered about texnansi so we
> might as well stick to that

But š and ž *are* part of the font and part of the texnansi encoding.

3.) When creating a dedicated mkiv typescript, č is missing (and so
are ć and đ, but I can live without those two as long as Nino is not
nearby :); however, not being able to use č is a no-go for me.

> again, maybe the font is not complete
>
> i'm a little surprised as i'd suppose those virtual fonts to be ok
>
> we can still consider to use the (texfont-generated) texnansi-* and
> ec-* variants

The virtual font *is* ok. It is MkIV that doesn't care about that
virtual font.
In short:
- texnansi -> a real tfm; all the glyphs are present in the font
- ec -> only a vf; some glyphs are missing from the font, but are
  composed in the vf
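To illustrate what the ec vf does for č: in VPL terms (vftovp output),
the composed character looks roughly like this. This is an illustrative
sketch of the general accent-placement technique, not the actual
contents of the CTAN virtual font; the slot numbers and dimensions are
invented:

```
(CHARACTER D 163 (COMMENT ccaron, faked from caron + c)
   (CHARWD R 0.444)
   (CHARHT R 0.722)
   (MAP
      (PUSH)
      (MOVERIGHT R 0.06)  % shift the accent so it centers over the c
      (SETCHAR O 24)      % the caron (hacek) accent glyph
      (POP)               % back to the start of the cell
      (SETCHAR C c)       % the base letter
      ))
```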
Is there any way to get those composed glyphs out in mkiv?

I'll get to math later. At the moment I had to figure out how the
files are organized and how to use them without having to depend on
texfont.

> lucida was always different but afaik tug now ships them in default
> tex math encoding, so the math-lbr vector is useless now - almost
> default -
It's quite OK as a first approximation, but still needs some fixes.
If nothing else, it offers way more glyphs than CM/LM.
Mojca