Hi all (Idris especially),

When you read the Omega 1.12 documentation, section 12 is all about input and output text file encodings and how Omega deals with those in a much more elaborate way than traditional TeX. In particular, there are "modes" and "translations", along with a number of (mixed case, yuk) primitives.

I would like very much to have a small but complete set of test files to make sure that my merge of the Aleph code into LuaTeX is correct. Because the I/O internals are quite different, I have a hard time making sure LuaTeX behaves just like Aleph.

Idris, could you (or perhaps another Aleph user) create such a set of test files? Something like a mini-trip test for just this bit of code.

Thanks in advance,
Taco
Hi Taco,
On Wed, 16 Aug 2006 08:48:34 -0600, Taco Hoekwater wrote:
Hi all (Idris especially),
When you read the Omega 1.12 documentation, section 12 is all about Input and Output text file encodings and how Omega deals with those in a much more elaborate way than traditional TeX. In particular, there are "modes" and "translations", along with a number of (mixed case, yuk) primitives.
I would like very much to have a small but complete set of test files to make sure that my merge of the Aleph code into LuaTeX is correct. Because the I/O internals are quite different, I have a hard time making sure LuaTeX behaves just like Aleph.
I'll put something together using the gamma module (see m-gamma.tex and type-omg.tex) sometime today. See also the off-list mail I sent you.
Idris, could you (or perhaps another Aleph user) create such a set of test files? Something like a mini-trip test for just this bit of code.
I just checked out that section: as far as I can remember, I have never used any of these primitives in any of my work. Roger Wright apparently uses some of this:

http://pws.prserv.net/Roger_Wright/ROGER.HTM
http://pws.prserv.net/Roger_Wright/utf8test.htm

as does Vincent Zoonekynd:

http://zoonek.free.fr/LaTeX/
http://zoonek.free.fr/LaTeX/Omega-Japanese/doc.html

This is interesting; it allows one to specify the input encoding globally, prior to and independently of the ocp list. I've always done that inside the ocp list itself in my own work (but then again, I generally use multiple inputs). Perhaps this mechanism holds a clue to the problem of abstracting ocp lists, so that the input encodings, internal translation processing, and output font mapping can be handled separately instead of being hard-wired into a single ocp list.

Does anyone have that book on LaTeX written by our Greek colleague (I don't remember his name)? It has a chapter on Omega and maybe some examples.

I'll send a note to the aleph list to see if anyone else has used this stuff.

Best
Idris

--
Professor Idris Samawi Hamid
Department of Philosophy
Colorado State University
Fort Collins, CO 80523

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
On Wed, 16 Aug 2006 09:54:10 -0600, Idris Samawi Hamid wrote:
Does anyone have that book written on LaTeX by our Greek colleague (I don't remember his name); it has a chapter on Omega and maybe some examples
Digital Typography Using LaTeX
Apostolos Syropoulos
http://www.amazon.com/gp/product/0387952179/sr=8-1/qid=1155744248/ref=sr_1_1...

--
Professor Idris Samawi Hamid
Department of Philosophy
Colorado State University
Fort Collins, CO 80523
Hi, Idris Samawi Hamid wrote:
I'll put something together using the gamma module (see m-gamma.tex and type-omg.tex) sometime today. See also the off-list mail I sent you.
Meanwhile torture.tex has proved to be quite useful. It runs almost OK now (one issue remains, afaics), and it should be possible to fix that tomorrow, so that at least one non-trivial document is typeset correctly.

In case anyone is interested: the biggest problems are caused by the move from the two separate homogeneous 'file i/o' models used by pdfTeX and Aleph (bytes and 16-bit shorts, respectively) to the variable-length encoding (UTF-8) that LuaTeX uses internally. For example, when TeX is scanning a control sequence name, it runs one item past the end of the name and then jumps back one item to find the actual last character of the name. This does not work in UTF-8, because if the last character was > 128, it has to back up two, three, or even four items.
This is interesting; it allows one to globally specify the input encoding prior/independent of the ocp-list. I've always just done that inside the ocp list itself in my own work (but then again I use multiple inputs generally).
I've looked at this quite intensively over the past two days, and I propose to drop this entire feature. It does not seem to be heavily used; the primitive names are 'abnormal' for TeX (e.g. \noDefaultInputMode is an actual primitive); the feature is very likely to clash with, as well as complicate, future callbacks to/from Lua scripting; and finally, both the interface and the implementation appear to be either a badly rushed job or only an experiment. A fresh implementation of file encoding support using Lua makes more sense to me (and will probably take less time than fixing the Omega code).

Greetings, Taco
Taco,
I would like very much to have a small but complete set of test files to make sure that my merge of the Aleph code into LuaTeX is correct. Because the I/O internals are quite different, I have a hard time making sure LuaTeX behaves just like Aleph.
There is a file named torture.tex with lots of Arabic/Farsi/etc. text, though I'm not sure the name accurately describes what the file is. Unfortunately, it uses a primitive named \nextfakemath which is not available in Aleph, but if you ignore that, the file could help.

Javier
participants (3)
-
Idris Samawi Hamid
-
Javier Bezos
-
Taco Hoekwater