Hans Hagen said this at Mon, 25 Apr 2005 20:48:13 +0200:
In other news, would a TeX'n'Unicode encoding interest anyone?
depends on what you mean;
one thing i've been discussing with jacko is that we need an as-many-chars-as-possible encoding; for instance, symbols like \copyright are used so seldom that they can stay in a symbol font, and their slots can be used for something more useful
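as a sketch, a dvips-style encoding vector could reclaim such a slot like this (the name texdense.enc and the replacement glyph are made up, just for illustration):

    % texdense.enc -- hypothetical fragment, not a shipped encoding
    /TeXDenseEncoding [
      % ... slots 0x00-0xA8 omitted; a real vector needs 256 entries ...
      /fi    % slot 0xA9 is /copyright in latin-1; reclaimed here, and
             % \copyright gets typeset from a symbol font instead
      % ... slots 0xAA-0xFF omitted ...
    ] def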
Yeah, I remember you mentioning it earlier. At the time, I was headed in the opposite direction, looking for sparser encodings so the LCDF typetools (for .otf fonts) could insert alternates or ligatures into the empty slots (rough sketch after my sig). That's the basic idea behind TeX'n'Unicode: like texnansx, it gets rid of the duplicates in the encoding; unlike texnansx, it keeps the glyphs, which makes it compatible with Unicode's 00-vector (the U+00xx range).

Anyway, I could take a look at the character-dense encoding, but it would help to know what your priorities are:

1) Do you want combining accents kept in?
2) Which languages get priority after western European? Central European, I'd guess, but in what order?
3) The concept sounds rather like the EC encoding (in its relationship with TS1), but with less of the backwards-compatibility "cruft". Should I take that as a starting point, or work from texnansi?

--
Adam T. Lindsay, Computing Dept.       atl@comp.lancs.ac.uk
Lancaster University, InfoLab21        +44(0)1524/510.514
Lancaster, LA1 4WA, UK                 Fax:+44(0)1524/510.492
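P.S. Here's roughly what I mean by the empty-slot trick; the encoding name and entries are made up, but the LIGKERN comment syntax is what otftotfm actually reads:

    % texnuni.enc -- hypothetical sketch of a sparse, duplicate-free vector
    /TeXnUnicodeEncoding [
      % ... entries omitted; duplicate glyphs are set to /.notdef, so
      % ... otftotfm can drop ligatures or alternates into those slots ...
    ] def
    % LIGKERN f i =: fi ;
    % LIGKERN f l =: fl ;

Then something like `otftotfm -e texnuni -fkern -fliga FontName.otf' should place the ligature glyphs in the freed slots. (Flags quoted from memory; do double-check them against the typetools docs.)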