Dear list,

I have the following sample:

\def\ThisOption{ab}
\def\ThatOption{ábc}
\starttext
\executesystemcommand{contextjit --purgeall
  --arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}" second.tex}
\contextversion
\stoptext

The contents of second.tex read:

\starttext
\enablemode[\env{OptionThis}]
\enablemode[\env{OptionThat}]
This: \doifmodeelse{ab}{enabled}{disabled}.\par
That: \doifmodeelse{ábc}{enabled}{disabled}.
\stoptext

I use --arguments to pass modes to documents compiled via \executesystemcommand. Everything worked fine. This morning I updated ConTeXt at work (on Windows 7) and modes with non-ASCII chars are no longer recognized.

Could anyone confirm the issue I’m describing on Windows?

Is there any ConTeXt command (or Lua function) that translates non-ASCII chars to their ASCII values?

Many thanks for your help,

Pablo
--
http://www.ousia.tk
Pablo Rodriguez schrieb am 02.12.2019 um 16:18:
Dear list,
I have the following sample:
\def\ThisOption{ab}
\def\ThatOption{ábc}
\starttext
\executesystemcommand{contextjit --purgeall
  --arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}" second.tex}
Limited alternative (no rerun when second.pdf exists, and by default the resulting PDF is loaded as an image):

\typesetfile[second.tex][--arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}"][object=no]
\contextversion
\stoptext
The contents of second.tex read:
\starttext
\enablemode[\env{OptionThis}]
\enablemode[\env{OptionThat}]
This: \doifmodeelse{ab}{enabled}{disabled}.\par
That: \doifmodeelse{ábc}{enabled}{disabled}.
\stoptext
I use --arguments to pass modes to documents compiled via \executesystemcommand.
Everything worked fine. This morning I updated ConTeXt at work (on Windows 7) and modes with non-ASCII chars are no longer recognized.
Could anyone confirm the issue I’m describing in Windows?
I get the same results with MkIV but LMTX works.
Is there any ConTeXt command (or Lua function) that translates non-ASCII chars to their ASCII values?
Lua: characters.shaped(...)

Wolfgang
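[Editor's note: a minimal sketch of how this might be combined with the original sample. Only characters.shaped itself comes from the answer above; the helper macro \ShapedOption and the exact expansion behaviour inside \cldcontext are my own untested assumptions.]

\def\ThatOption{ábc}
% hypothetical helper: expand to the de-accented form of its argument
\def\ShapedOption#1{\cldcontext{characters.shaped("#1")}}

\starttext
\executesystemcommand{contextjit --purgeall
  --arguments="OptionThat={\ShapedOption{ábc}}" second.tex}
\stoptext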
On 12/2/19 6:01 PM, Wolfgang Schuster wrote:
Pablo Rodriguez schrieb am 02.12.2019 um 16:18:
\starttext \executesystemcommand{contextjit --purgeall --arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}" second.tex}
Limited alternative (no rerun when second.pdf exists, and by default the resulting PDF is loaded as an image):
\typesetfile[second.tex][--arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}"][object=no]
Many thanks for your reply, Wolfgang. This is an interesting way to avoid the issue.
Could anyone confirm the issue I’m describing in Windows?
I get the same results with MkIV but LMTX works.
I’d love to switch to LMTX, but this isn’t an option for me. It has an issue with some fonts (I already reported it). In this sample the \ss and \cg fonts cannot be used, even when invoking them by file name:

\definefontfamily[mainface][rm][TeX Gyre Heros]
\definefontfamily[mainface][mm][TeX Gyre Termes Math]
\definefontfamily[mainface][ss][Gill Sans MT][tf=file:GIL_____.TTF]
\definefontfamily[mainface][tt][Cousine]
\definefontfamily[mainface][cg][Arial Narrow][tf=file:ARIALN.TTF]
\setupbodyfont[mainface]

\startbuffer
\ConTeXt\ is awesome!\par
\stopbuffer

\starttext
\startTEXpage[offset=1em]
\getbuffer
\ss\getbuffer
\tt\getbuffer
\cg\getbuffer
\stopTEXpage
\stoptext

https://mailman.ntg.nl/pipermail/ntg-context/2019/094953.html
https://mailman.ntg.nl/pipermail/ntg-context/2019/095353.html
Is there any ConTeXt command (or Lua function) that translates non-ASCII chars to their ASCII values?
Lua: characters.shaped(...)
Many thanks for your help, Pablo -- http://www.ousia.tk
Pablo Rodriguez schrieb am 02.12.2019 um 18:46:
Could anyone confirm the issue I’m describing in Windows? I get the same results with MkIV but LMTX works. I’d love to switch to LMTX, but this isn’t an option for me. It has an issue with some fonts (I already reported).
In this sample \ss and \cg fonts cannot be used, even invoking them by file name:
\definefontfamily[mainface][rm][TeX Gyre Heros]
\definefontfamily[mainface][mm][TeX Gyre Termes Math]
\definefontfamily[mainface][ss][Gill Sans MT][tf=file:GIL_____.TTF]
\definefontfamily[mainface][tt][Cousine]
\definefontfamily[mainface][cg][Arial Narrow][tf=file:ARIALN.TTF]
\setupbodyfont[mainface]

\startbuffer
\ConTeXt\ is awesome!\par
\stopbuffer

\starttext
\startTEXpage[offset=1em]
\getbuffer
\ss\getbuffer
\tt\getbuffer
\cg\getbuffer
\stopTEXpage
\stoptext
LMTX finds the fonts and loads them, but nothing appears in the final PDF.

\nopdfcompression
\starttext
\definedfont[file:arialn.ttf*default]Arial Narrow
\stoptext

An alternative to Arial Narrow is Arial Nova, which has a condensed variant and comes as part of the pan-European font pack in Windows 10.

\startmkivmode
\definefontfamily [arial-narrow] [ss] [Arial Narrow]
\definefontfamily [gill-sans]    [ss] [Gill Sans MT]
\stopmkivmode

\startlmtxmode
\definefontfamily
  [arial-narrow]
  [ss]
  [Arial Nova]
  [tf=style:condensed,
   it=style:condenseditalic,
   bf=style:condensedbold,
   bi=style:condensedbolditalic]
\definefontfamily [gill-sans] [ss] [Gill Sans Nova]
\stoplmtxmode

\starttext
regular \italic{italic} \bold{bold} \bolditalic{bolditalic}
\switchtobodyfont[arial-narrow]
regular \italic{italic} \bold{bold} \bolditalic{bolditalic}
\switchtobodyfont[gill-sans]
regular \italic{italic} \bold{bold} \bolditalic{bolditalic}
\stoptext

Wolfgang
On 12/2/19 9:45 PM, Wolfgang Schuster wrote:
[...] LMTX finds the fonts and loads them but nothing appears in the final PDF.
\nopdfcompression
\starttext
\definedfont[file:arialn.ttf*default]Arial Narrow
\stoptext
Many thanks for your reply, Wolfgang.

I’m afraid there is no font there. Compiling the source above with MkIV generates a PDF document with the object "1 0 obj", which contains a /Font dictionary including two /Type0 font dictionaries.

Compiling the source above with LMTX, the resulting PDF document contains a /Page dictionary ("6 0 obj") with a /Font dictionary within. This dictionary refers to an existing /Type0 font dictionary ("1 0 obj") and a missing "2 0 obj".

I don’t know why "2 0 obj" isn’t generated, but it should be the missing font.
An alternative to Arial Nova is Arial Nova, which has a condensed variant and comes as part of the pan-European font pack in Windows 10.
I’m afraid a new font is not an option for me at work.

Many thanks for your help,

Pablo
--
http://www.ousia.tk
On 12/3/2019 8:37 PM, Pablo Rodriguez wrote:
On 12/2/19 9:45 PM, Wolfgang Schuster wrote:
[...] LMTX finds the fonts and loads them but nothing appears in the final PDF.
\nopdfcompression
\starttext
\definedfont[file:arialn.ttf*default]Arial Narrow
\stoptext
Many thanks for your reply, Wolfgang.
I’m afraid there is no font there.
Compiling the source above with MkIV generates a PDF document with the object 1 0 obj, which contains a /Font dictionary including two /Type0 font dictionaries.
Compiling the source above with LMTX, the resulting PDF document contains a /Page dictionary ("6 0 obj") with a /Font dictionary within. This dictionary refers to an existing /Type0 font dictionary ("1 0 obj") and a missing "2 0 obj".
I don’t know why "2 0 obj" isn’t generated, but this should be the missing font.
I don't have that font. The version I have here works ok.
An alternative to Arial Narrow is Arial Nova which has condensed variant and comes as part of the pan-european font pack in windows 10.
I’m afraid a new font is not an option for me at work.

But there are better (Unicode) versions available on Windows, so why not use those (as Wolfgang pointed out)?
Hans

-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------
On 12/3/19 10:14 PM, Hans Hagen wrote:
On 12/3/2019 8:37 PM, Pablo Rodriguez wrote:
[...] I’m afraid a new font is not an option for me at work. But there are better (Unicode) versions available on Windows, so why not use those (as Wolfgang pointed out)?
Many thanks for your reply, Hans.

I would use the Unicode versions at home (if I had a Windows license), but at work it is problematic even to ask for free fonts. My company may be too strict with these policies, but I don’t think it is a total exception in this matter.

I hope we might switch to Windows 10 in the not so near future 😃.

Many thanks for your help,

Pablo
--
http://www.ousia.tk
On 12/4/2019 10:19 PM, Pablo Rodriguez wrote:
[...]
I hope we might switch to Windows 10 in the not so near future 😃.

- if you use tex, use fonts that are put in the tex tree, the only guarantee you have for continuity

- i checked a few versions of that font on an old windows xp vm and it works ok ... so does your company ever update the machines?

Hans
On 12/5/19 9:23 AM, Hans Hagen wrote:
[...]
- if you use tex, use fonts that are put in the tex tree, the only guarantee you have for continuity.
- i checked a few versions of that font on an old windows xp vm and it works ok ... so does your company ever update the machines?

Hi Hans,

I wonder whether my issue with the fonts might be caused by my ConTeXt distribution (such as the one with expansion, reported some time ago).

Windows is updated with security patches (or whatever they may be called). If I try to explain to the IT people that a font is misbehaving and that an application (I don’t even try to explain what ConTeXt actually is [they wouldn’t understand]) cannot recognize it, they will reply that the application is wrong. Because of that, I’m not even trying to explain what is wrong with the font at all.

In my previous company, I remember a guy (who had a slight idea of what TeX was [he used LaTeX for his PhD decades ago]) with a strong background in computer science. He was impressed that I “developed” a system with no coding knowledge, but he objected that ConTeXt wasn’t standard software (the standard for him was OpenOffice.org). I replied that the standard was in the output PDF documents (which were PDF/A-3a). (I didn’t mention that working with OOo was a real pain and that trying to write conditionals with document merging was extremely annoying for me.)

Many thanks for your help and your new betas,

Pablo
--
http://www.ousia.tk
On Thu, Dec 05, 2019 at 08:31:45PM +0100, Pablo Rodriguez wrote:
He was impressed that I “developed” a system with no coding knowledge, but he objected that ConTeXt wasn’t standard software (the standard for him was OpenOffice.org). I replied that the standard was in the output PDF documents (which were PDF/A-3a).
And what did he have to say to that?
(I didn’t mention that working with OOo was a real pain and that trying to write conditionals with document merging was extremely annoying for me.)
You should have.

Best,

Arthur
On 12/6/19 10:01 AM, Arthur Reutenauer wrote:
On Thu, Dec 05, 2019 at 08:31:45PM +0100, Pablo Rodriguez wrote:
He was impressed that I “developed” a system with no coding knowledge, but he objected that ConTeXt wasn’t standard software (the standard for him was OpenOffice.org). I replied that the standard was in the output PDF documents (which were PDF/A-3a).
And what did he have to say to that?
He argued that if any other person had to improve the system, he or she would have to learn everything. While I cannot disagree with that, I’m afraid this would happen no matter which software I deployed (library, tool, or anything else). I must admit that although I cannot code, GUIs distract me when working with text. But that would be my limitation.
(I didn’t mention that working with OOo was a real pain and that trying to write conditionals with document merging was extremely annoying for me.)
You should have.
I know, but he was one of my bosses. And I remember I was totally shocked when he explained to me that the standard document format for any word processor was OpenOffice.org.

I cannot recall the exact details of his explanation, but he seemed to think that even Microsoft Word was at fault for not implementing the Open Document Text format (.odt) as its native format. The reasoning was so bizarre and shocking to me that I understood it was better simply to decline the discussion. Other factual inaccuracies about the work done in other departments also made it clear to me that it was better to avoid the conversation.

OOo was the cause of many compatibility issues in that company, because they simply uninstalled Microsoft Office and installed OpenOffice.org (with no previous warning) in one evening. The installation was so poorly performed that they forgot to assign the Microsoft Office file extensions to the OOo programs in Windows. This minor incident alone was a huge problem for the vast majority of users.

It was clear to me what I had known for decades: free software isn’t programs for free. I think they still have to deal with issues in OOo.

I talked with that guy no more than five times. But if he was the evangelist of free software in that company, I’m afraid I totally agree with the people working there who hated OOo.

Pablo
--
http://www.ousia.tk
On 12/7/2019 2:40 PM, Pablo Rodriguez wrote:
I know, but he was one of my bosses. And I remember I was totally shocked when he explained to me that the standard document format for any word processor was OpenOffice.org.
Anyone claiming that something is a standard (esp. in computer science) is unaware of history. I bet that our ancestors of thousands of years ago also considered themselves modern, with standards etc. But sometimes I think that many folks today consider themselves on top of the human pyramid (intelligence, progress, morale, etc.) compared to whoever came before. History proved them wrong. (Similar are claims of this or that being better (software, operating systems, etc.) while in the end much converges to the same.)

Anyway, it's a waste of time and energy discussing with those folks.

And we as texies should also be honest: how many of the acclaimed 'happy' tex users are really 'happy' with their system, and are those 'millions' really (unforced) users who couldn't just as well be using Word or Google Docs or ... given what and how they deal with documents? And let's not add the quality argument, because a couple of weeks ago I noticed that tex output didn't look any better in a display of some 80 summaries at some meeting (the "oh, look how bad that table looks" experience). (I'm sure Arthur, when reading this, can comment, as we sort of had this as the theme of a talk!)

(And yes, I consider myself a happy tex user, but I also admit that I don't have to write much. And yes, it's a specific kind of user and usage.)
I cannot recall the accurate details from his explanation, but he seemed to think that even Microsoft Word was at fault for not implementing the Open Document Text format (.odt) as its native format.
I suppose he read the specs of both formats in detail (in print of course).
The reasoning was so bizarre and shocking to me that I understood that it was better to me simply to decline the discussion. Also other factual inaccuracies about the work done in other departments made me clear that it was better to avoid the conversation.
Indeed. Waste of time. Just think of this: you could kind of check his claims, so how about all the other claims someone makes ... stuff you know little about ... how valid are those claims then.
OOo was the cause of many compatibility issues in that company, because they simply uninstalled Microsoft Office and installed OpenOffice.org (no previous warning) in one evening.
Well, as long as they're happy ... in most cases no one cares how output looks, nor cares about long term storage and exchange of data. Going belly up means 'delete all data and thrash the machines'.
The installation was so poorly performed that they forgot to assign the Microsoft Office file extensions to the OOo programs in Windows. This minor incident alone was a huge problem for the vast majority of users.
And then they entered denial state.
It was clear to me what I knew decades ago: free software isn’t programs for free. I think they still have to deal with issues in OOo.
Although not all free software comes for free. I'm not that sure online tex services are cheaper than bulk Microsoft licenses.
I only talked no more than five times with that guy. But if he was the evangelist of free software in that company, I’m afraid I totally agree with the people working there that hated OOo.
It's all about honesty, isn't it? And about people spending time and energy; that doesn't always go well with commercial objectives. And there's always the knowledge issue. And especially when open source and such starts looking like a religion (one without a long history of dealing with itself and communicating properly), it gets even trickier.

Hans
On 12/7/19 4:32 PM, Hans Hagen wrote:
On 12/7/2019 2:40 PM, Pablo Rodriguez wrote:
I know, but he was one of my bosses. And I remember I was totally shocked when he explained to me that the standard document format for any word processor was OpenOffice.org.
Anyone claiming that something is a standard (esp in computer science) is unaware of history. I bet that our ancestors of thousands of years ago also considered themselves modern, with standards etc. But sometimes I think that many folks today think of themselves as being on top of the human (intelligence, progress, morale, etc) pyramid compared to whoever came before. History proved them wrong. (Similar are claims of this or that being better (software, operating systems, etc) while in the end much converges to the same.)
The main problem with his explanation of the standard was that he mixed up tools and formats. I’m not sure any tool is a standard; it is the file format for some kind of data that is relevant to standardize.
Anyway, it's a waste of time and energy discussing with those folks.
And we as texies should also be honest: how many of the acclaimed 'happy' tex users are really 'happy' with their system, and are those 'millions' really (unforced) users who couldn't just as well be using Word or Google Docs or ... given what and how they deal with documents? And let's not add the quality argument, because a couple of weeks ago I noticed that tex output didn't look any better in a display of some 80 summaries at some meeting (the "oh, look how bad that table looks" experience).
I must admit I’m having my worst time since I started using TeX (decades ago), because some documents I generate have simply unreadable parts that are plainly wrong (even by the lowest standards). It makes me suspect that one of my approaches to basic functionality could be flawed.

But I don’t doubt that ConTeXt is the most useful piece of software I use on a daily basis (followed by git and pandoc [probably in that order]), because it lets me achieve things I couldn’t do using other tools. I’m extremely comfortable using it (although I report all bugs I hit). And I think it is worth the effort to investigate further and contribute to the fix for the problem described above (even if it drives me crazy).
(And, yes I consider myself a happy tex user, but I also admit that I don't have to write much. And yes, it's a specific kind of user and usage.)
Well, Hans, this specific kind of user and usage is called LuaTeX, LuaMetaTeX and ConTeXt development.
I cannot recall the accurate details from his explanation, but he seemed to think that even Microsoft Word was at fault for not implementing the Open Document Text format (.odt) as its native format.
I suppose he read the specs of both formats in detail (in print of course).
His reasoning was flawed from the start. You cannot complain that an egg is not an elephant. The Microsoft Word format may be proprietary (it is actually a standard), but there is nothing wrong in being different from another text document format. XML may have its flaws, but it is perfectly fine that it isn’t PostScript. (Otherwise, what the reasoning demands is the removal of XML as a format.)
The reasoning was so bizarre and shocking to me that I understood that it was better to me simply to decline the discussion. Also other factual inaccuracies about the work done in other departments made me clear that it was better to avoid the conversation.
Indeed. Waste of time. Just think of this: you could kind of check his claims, so how about all the other claims someone makes ... stuff you know little about ... how valid are those claims then.
I perfectly recall one factual statement he made about one detail from another department. I told him that I had worked there two years earlier and that I thought things were different. After leaving his office, I went to the department and checked the detail. He was wrong, and I decided that I should avoid him (just in case he wanted to discuss general questions).
The installation was so poorly performed that they forgot to assign the Microsoft Office file extensions to the OOo programs in Windows. This minor incident alone was a huge problem for the vast majority of users.
And then they entered denial state.
They didn’t even need that. They didn’t experience the problems, since the users didn’t complain to them (the installation was remote).
It was clear to me what I knew decades ago: free software isn’t programs for free. I think they still have to deal with issues in OOo.
Although, not all free software comes for free. I'm not that sure of online tex services are cheaper than bulk microsoft licenses.
I know, and the problem with that migration was that they wanted to avoid spending on license fees.
I only talked no more than five times with that guy. But if he was the evangelist of free software in that company, I’m afraid I totally agree with the people working there that hated OOo.
It's all about honesty, isn't it? And about people spending time and energy; that doesn't always go well with commercial objectives. And there's always the knowledge issue. And especially when open source and such starts looking like a religion (one without a long history of dealing with itself and communicating properly), it gets even trickier.
I don’t have any problem with OOo myself, but I understand that people thought OOo was a cheap and bad alternative to Microsoft Office. The problem with the way they used OOo is that you shouldn’t expect bugs to disappear by magic. At the very least, start learning how to report bugs (and then report them).

As an ideology, I think free software is most problematic. I agree that free software may be a good contribution to society, but it is no moral problem at all. (Probably we should start questioning why we apply copyright law to trade secrets.)

Pablo
--
http://www.ousia.tk
On 12/7/2019 11:49 PM, Pablo Rodriguez wrote:

free software may be a good contribution to society, but it is no moral problem at all. (Probably we should start questioning why we apply copyright law to trade secrets.)

What always puzzles me is that companies that otherwise depend on open source have no problems with quite rigorous copyright claims, even go for lawsuits, and don't care about violating privacy. And I also have not much sympathy for companies that start using free software, boast about it, kind of use the community and then cash out, quit support, put the useful stuff in expensive variants only and in the end leave users in the cold. I have no problem with commercial activities, but they should be open and cannot be hidden.

Btw, it's one reason why open source / free software can fail in some cases: support can be more expensive than for commercial stuff, although often tex is dirt cheap, if only because much can be achieved with little coding and once it works, it works forever.

One problem in your case is that managers probably don't make a proper cost-benefit analysis. Maybe we should make some templates for that some day.

I wonder how much backslash - pun intended - there is because of all that pseudo open source. Makes a nice topic for late-night discussions at a next ctx meeting.

Hans
On 12/2/19 6:01 PM, Wolfgang Schuster wrote:
[...] Limited alternative (no rerun when second.pdf exists and by default the resulting PDF is loaded as image).
\typesetfile[second.tex][--arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}"][object=no]
Hi Wolfgang,

is there any way to add more options to the typeset file? I need to set "--result" and "--purgeall" (and maybe another one [I cannot remember; I’m not in front of the computer at work]).

Many thanks for your help,

Pablo
--
http://www.ousia.tk
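[Editor's note: an untested guess at what this could look like, assuming \typesetfile simply forwards whatever is in its second bracket to the context runner; I have not verified that switches other than --arguments are accepted there, and the result name "myname" is a placeholder.]

\typesetfile
  [second.tex]
  [--purgeall --result=myname
   --arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}"]
  [object=no]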
On 12/2/2019 4:18 PM, Pablo Rodriguez wrote:
Dear list,
I have the following sample:
\def\ThisOption{ab}
\def\ThatOption{ábc}
\starttext
\executesystemcommand{contextjit --purgeall
  --arguments="OptionThis={\ThisOption},OptionThat={\ThatOption}" second.tex}
\contextversion \stoptext
The contents of second.tex read:
\starttext
\enablemode[\env{OptionThis}]
\enablemode[\env{OptionThat}]
This: \doifmodeelse{ab}{enabled}{disabled}.\par
That: \doifmodeelse{ábc}{enabled}{disabled}.
\stoptext
I use --arguments to pass modes to documents compiled via \executesystemcommand.
Everything worked fine. This morning I updated ConTeXt at work (with Win7) and modes with non-ASCII chars aren’t recognized.
Could anyone confirm the issue I’m describing in Windows?
Is there any ConTeXt command (or Lua function) that translates non-ASCII chars to their ASCII values?

Define ASCII ... there are no accented characters in ASCII.
windows uses code pages so it all depends on what trickles down to the command processor; here you pass utf that then gets interpreted depending on your environment.

as you use luajittex it depends on what the windows binary does (i remember reading that something was changed in the native windows binaries because it was needed/decided at the latex end) ... one reason more to never use non-ascii for critical stuff (lots of chicken-and-egg issues there, kind of guaranteed fix this, break that)

anyway, in luametatex with lmtx we're (hopefully) code page neutral (as far as i could test; all utf8 and windows utf16) and we're not going to touch the default luatex internals like that

Hans

(windows 7 is kind of outdated so you can't expect someone to check that setup out)
On 12/2/19 6:05 PM, Hans Hagen wrote:
[...] anyway, in luametatex with lmtx we're (hopefully) code page neutral (as far as i could test; all utf8 and windows utf16) and we're not going to touch the default luatex internals like that
Many thanks for your reply, Hans.

As written before, I would love to use LMTX, but using Gill Sans and Arial Narrow is mandatory for us. For some strange reason (as reported in my previous message), LMTX isn’t able to deal with them.
(windows 7 is kind of outdated so you can't expect someone to check that setup out)
I only use Windows at work. All I wanted to know was whether Windows was affected by this issue.

Many thanks for your help,

Pablo
--
http://www.ousia.tk
On 12/2/2019 6:51 PM, Pablo Rodriguez wrote:
On 12/2/19 6:05 PM, Hans Hagen wrote:
[...] anyway, in luametatex with lmtx we're (hopefully) code page neutral (as far as i could test; all utf8 and windows utf16) and we're not going to touch the default luatex internals like that
Many thanks for your reply, Hans.
As written before, I would love to use LMTX, but using some Gill Sans and Arial Narrow is mandatory for us.
For some strange reason (as reported in my previous message), LMTX isn’t able to deal with them.
arial looks ok here ... (2014 version, windows) so maybe you should look for a newer version than yours
(windows 7 is kind of outdated so you can't expect someone to check that setup out)
I only use Windows at work. All I wanted to know was whether Windows was affected by this issue.
Many thanks for your help,
Pablo
--
http://www.ousia.tk

Hans
participants (4)
- Arthur Reutenauer
- Hans Hagen
- Pablo Rodriguez
- Wolfgang Schuster