On 24 Jul 2015, at 15:34, Hans Hagen wrote:
I have experimented with a proof assistant that accepted parallel ASCII and Unicode symbol names, but it turned out to be complicated. Think of C/C++ trigraphs: a chore to implement, only to be removed in the latest standards.
So I think one should focus only on UTF-8 and add TeX ASCII “\” commands as a complement.
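A minimal sketch of what such parallel naming could look like, in Lua (the names and fields are only illustrative, not ConTeXt definitions):

    -- illustrative only: one table carrying both the ASCII command name
    -- and the UTF-8 character, so either spelling maps to the same slot
    local symbols = {
        { name = "coloneq", char = "≔", slot = 0x2254 }, -- COLON EQUALS
        { name = "leq",     char = "≤", slot = 0x2264 }, -- LESS-THAN OR EQUAL TO
    }

    local byname, bychar = {}, {}
    for _, s in ipairs(symbols) do
        byname[s.name] = s.slot
        bychar[s.char] = s.slot
    end
    -- byname["coloneq"] == bychar["≔"] == 0x2254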
One problem with this approach is the lack of Unicode input methods, but that may be coming.
For example, instead of having “:=” in the input file and letting Lua translate it, one can simply type it and let the text editor translate it to ≔ (COLON EQUALS, U+2254).
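For reference, the Lua-side translation mentioned above could look roughly like this in bare LuaTeX (a sketch only; ConTeXt registers its callbacks through its own interfaces):

    -- sketch: rewrite selected ASCII sequences to Unicode while reading input
    local replacements = {
        [":="] = "≔",  -- COLON EQUALS, U+2254
    }

    callback.register("process_input_buffer", function(line)
        for ascii, uni in pairs(replacements) do
            line = line:gsub(ascii, uni)  -- ":" and "=" are not pattern magic
        end
        return line
    end)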
that is ok for some input sequences (this kind of input translation happens for accented characters and some math like negated symbols) but replacing <= in the input is bad as it is only meaningful in math and not all input is math (and unicode lacks script/language tagging); keep in mind that 'verbatim' in tex really means verbatim and input translation contradicts that
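Continuing the sketch above, the objection is easy to see: a line filter has no idea whether a line is math, text or verbatim, so it rewrites all of them alike (the example input is made up):

    local filter = function(line) return (line:gsub("<=", "≤")) end

    print(filter("$x <= y$"))            -- math: intended
    print(filter("if (a <= b) then"))    -- verbatim listing: silently corrupted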
I only have the part that produces math character output here; it would be better to have it in the key map. But somehow it should be possible to produce those Unicode characters. Right now that means a lot of copy and paste, which is slow.
It will save a lot of programming time, at least on the ConTeXt project. :-)
not really as that code is already in place for years; in this case it mainly boils down to adding some extra entries in the character database
(and this code is simple compared to other code so not much to save here)
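For a rough impression of the shape of such an entry (field names and values approximated from memory, not copied from char-def.lua):

    -- approximate shape of a character-database entry (cf. char-def.lua);
    -- fields and values may differ from the real file
    local extra = {
        [0x2254] = {
            category    = "sm",            -- Unicode general category Sm
            description = "COLON EQUALS",
            direction   = "on",
            mathclass   = "relation",
            mathname    = "coloneq",       -- assumed command name, illustrative
            unicodeslot = 0x2254,
        },
    }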
You can add it if you think it is no problem. Proper Unicode characters in the input help readability, so giving them high priority seems good.