[NTG-context] Unicode stuff (was: Re: Specifying BibTeX engine)
Philipp Reichmuth
reichmuth at web.de
Sat Nov 4 17:19:36 CET 2006
gnwiii at gmail.com wrote:
> to replace "bibtex" with "ctxbibtex", which is a shell script I can edit
> to use bibtex8, etc. with appropriate arguments (e.g., for very
> large .bib files) as well as encoding tricks. A dirty hack is to put an
>
> \installprogram{ctxbibtex \jobname}
>
> line in your file (after the other setup). The job runs bibtex and
> then ctxbibtex, so you end up with the results of whatever is in your
> script.
OK, that's not pretty, but it would do the job, assuming that regular
7-bit BibTeX doesn't bail out on Unicode files (which it sometimes does
here).
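If I end up going that route, I'd probably write the ctxbibtex wrapper as a
small Python script rather than a shell script, roughly like this (only a
sketch: bibtex8's --huge switch and the .bbl fix-up step are just
placeholders for whatever arguments and encoding tricks one actually needs):

#!/usr/bin/env python
# ctxbibtex: sketch of a wrapper that runs bibtex8 with extra arguments
# and then post-processes the resulting .bbl file.
import subprocess
import sys

def main(jobname):
    # run bibtex8 instead of plain bibtex, e.g. with more capacity
    subprocess.check_call(["bibtex8", "--huge", jobname])
    # encoding tricks: rewrite the .bbl, e.g. replace characters that
    # a plain 7-bit BibTeX or the TeX run would choke on
    bblname = jobname + ".bbl"
    with open(bblname, "rb") as bbl:
        data = bbl.read()
    data = data.replace(b"\xc2\xa0", b"~")  # example: no-break space -> tie
    with open(bblname, "wb") as bbl:
        bbl.write(data)

if __name__ == "__main__":
    main(sys.argv[1])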
>> (Incidentally, I've been using a Python script to convert BibTeX files
>> between Unicode and {\=a}-style accent notation and am currently
>> thinking of putting in ConTeXt {\adiaeresis}-style accents as well;
>> would this be of interest to anyone?)
>
> I use GNU recode for this, but not with ConTeXt, where
> "\enableregime[utf]" has been working with my utf8 bibliography, so
> I haven't needed ConTeXt {\adiaeresis}-style accents.
Yes, the main reason I've been bothering with a custom script is that my
bibliography program (Citavi) produces buggy export files with Unicode
characters in BibTeX record keys, so I have to distinguish between
Unicode in keys and in data and treat them differently. Also, not all of
the characters I need are covered by ConTeXt-style accents. (I could
add them, of course, since I have the TeX code for them anyway; Hans, is
there a canonical way to do this?) Since I need to do some other fixes
anyway, I've put it in an extra script.
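The key/data distinction amounts to something like the following sketch
(not the actual script: the accent table is only a tiny excerpt and the
.bib parsing is deliberately naive):

# Sketch: ASCII-fy BibTeX record keys, but turn Unicode in field data
# into {\=a}-style accent notation.
import re
import sys

ACCENTS = {"ā": r"{\=a}", "č": r"{\v c}", "ḍ": r"{\d d}"}  # etc.

def fix_key(key):
    # Citavi sometimes exports Unicode inside the record key; strip it there
    return "".join(c for c in key if ord(c) < 128)

def fix_value(line):
    # in field data, replace Unicode characters by TeX accent notation
    for uni, tex in ACCENTS.items():
        line = line.replace(uni, tex)
    return line

ENTRY = re.compile(r"^(\s*@\w+\s*\{)([^,]*)(,.*)$")

def fix_bib(lines):
    for line in lines:
        m = ENTRY.match(line)
        if m:  # an "@book{key," line: only the key needs fixing
            yield m.group(1) + fix_key(m.group(2)) + m.group(3) + "\n"
        else:  # a field line: fix the data
            yield fix_value(line)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as bib:
        sys.stdout.writelines(fix_bib(bib))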
I have started to reuse some of this work in a script that does active
character assignment for XeTeX depending on what glyphs are present in
an OpenType font, so that those characters for which the font doesn't
have a glyph are generated by ConTeXt. Basically I want to produce
something like this:
\ifnum\XeTeXcharglyph"010D=0
  \catcode`č=13 \def č{\ccaron}% 13 = active: no glyph, let ConTeXt build it
\else
  \catcode`č=11 % 11 = letter
\fi % ConTeXt knows this letter -> better hyphenation
\ifnum\XeTeXcharglyph"1E0D=0
  \catcode`ḍ=13 \def ḍ{\b{d}}% 13 = active: no glyph in the font
\else
  \catcode`ḍ=11 % 11 = letter
\fi % ConTeXt doesn't know this letter
(with \other, i.e. catcode 12, for non-letters). Being somewhat of a novice
at TeX programming, I'm not sure whether this will work. I'm also not sure
whether it's better to generate a static file like this for every font (so
the resulting TeX file is one big, font-specific list of \catcode
assignments) or to do it dynamically on every font change, perhaps limited
to selectable Unicode ranges (which would be more general but also a lot
slower).
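For the static variant, one way would be a small script that reads the
font's cmap, e.g. with fontTools, and writes out the catcode assignments
per character; a rough sketch of the idea (the fallback table and the file
names are of course just made-up placeholders):

# Sketch: write a font-specific catcode/fallback file by checking which
# characters actually have a glyph in the font's cmap.
from fontTools.ttLib import TTFont

# character -> (is it a letter?, ConTeXt fallback code); made-up excerpt
FALLBACKS = {
    "č": (True, r"\ccaron"),
    "ḍ": (True, r"\b{d}"),
}

def make_catcode_file(fontfile, outfile):
    cmap = TTFont(fontfile)["cmap"].getBestCmap()  # Unicode -> glyph name
    with open(outfile, "w", encoding="utf-8") as out:
        for ch, (is_letter, fallback) in FALLBACKS.items():
            if ord(ch) in cmap:
                # the font has a glyph: just set the right catcode
                out.write("\\catcode`%s=%d\n" % (ch, 11 if is_letter else 12))
            else:
                # no glyph: make the character active and let TeX build it
                out.write("\\catcode`%s=13 \\def %s{%s}%%\n"
                          % (ch, ch, fallback))

# example call; file names are placeholders
make_catcode_file("SomeFont.otf", "somefont-chars.tex")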
> I'd prefer to see a context encoding added to GNU recode for the
> benefit of future archeologists trying to decipher ancient documents.
That would be better, I guess, but isn't the ConTeXt encoding a moving
target, in that characters can still be added? Or is the list fixed to AGL
glyph names and nothing else?
Philipp