Adam Lindsay wrote:
Yeah, I remember you mentioning it earlier. At the time, I was headed in the opposite direction, looking for sparser encodings so LCDF typetools (for .otf's) could insert alternates or ligatures into the empty slots. That's the basic idea behind TeX'n'Unicode: like texnansx, get rid of the duplicates in the encoding. Unlike texnansx, it keeps the glyphs so it stays compatible with Unicode's 00-vector.
indeed
Anyway, I could take a look at the character-dense encoding, but it'll help to know what your priorities are: 1) Do you want combining accents kept in?
no
2) Priority languages after Western European? Central European, I'd guess, but in what order?
combined, take a look at the qx encoding, it already goes in that direction
3) The concept sounds sort of like EC encoding (in its relationship with TS1), but with less of the backwards-compatibility "cruft". Should I take that as a starting point, or work with texnansi?
qx

Hans

-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | fax: 038 477 53 74
www.pragma-ade.com | www.pragma-pod.nl
-----------------------------------------------------------------
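[Editorial note: the sketch below is only an illustration of the "drop the duplicates" idea Adam describes for texnansx, i.e. blanking any slot whose glyph name already occurs earlier so a tool like otftotfm can reuse the freed slots for alternates or ligatures. The sparsify helper, the slot numbers, and the toy vector are illustrative assumptions, not the real texnansi/texnansx tables.]

    # Minimal sketch, not the real texnansx generation: free duplicate slots
    # in a 256-entry encoding vector so they can be refilled later.
    def sparsify(encoding):
        """Return a copy of a 256-slot encoding with repeated glyph names
        replaced by '.notdef', keeping only the first occurrence."""
        seen = set()
        sparse = []
        for name in encoding:
            if name != ".notdef" and name in seen:
                sparse.append(".notdef")   # slot freed for an alternate/ligature
            else:
                seen.add(name)
                sparse.append(name)
        return sparse

    # Toy vector: 'hyphen' in two slots, as it is in e.g. the T1 (Cork) encoding.
    dense = [".notdef"] * 256
    dense[0x2D] = "hyphen"
    dense[0x7F] = "hyphen"   # duplicate
    dense[0x41] = "A"

    sparse = sparsify(dense)
    print(sparse[0x2D], sparse[0x7F])   # -> hyphen .notdef (second slot is now free)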