
Hi,

A while ago there was some demand for language support in tagging and export (which already has some). One approach is to do it automatically, so

  test {\nl test} test

would then be split into pieces, but that is also error prone and maybe non-intentional. But then one can actually wonder about structure there anyway, so this makes more sense:

  \definehighlight[isdutch]   [language=nl,color=darkred]
  \definehighlight[isgerman]  [language=de,color=darkblue]

  \definestartstop[someczech] [language=cz,color=darkgreen]
  \definestartstop[somefrench][language=fr,color=darkyellow]

  % pablo will check the other languages and also the
  % kind of language tags we want to end up in the pdf
  % as he will check how viewers 'speak' - yesterday i
  % suddenly got some english youtube video where the
  % normally low pitch male voice (that i knew) became
  % high pitch female dutch for no reason ... and i
  % never asked for that ... so i fear future pdf
  % viewers

  test \isdutch  {test} test
  test \isgerman {test} test
  test \someczech {test} test
  test \somefrench{test} test

so that is what i added. Now, the question is: what commands (environments) need support for the language key? we have some 70 potential places and 270 possible edits .. oneliners, but kind of boring as we need to explicitly check where it kicks in (probably needs some distracting music on the speaker to get it done).

Concerning impact on performance, I tested with 10000 * 2 of the above highlights and got this:

  % 20K no support       : 2.52
  % 20K language not set : 2.55
  % 20K language set     : 3.12 % needs to be compensated

so we lose some performance, which means i have to gain it back someplace else (kind of a regular challenge when adding something).

so, the question is 'selective' or 'everywhere' (this is independent of tagging etc ...
assuming some structure it's trivial to add there; a general {\nl test} is more tricky as then we need to do more consistency testing, which i got kind of working and then removed again, because in the end we would then reward sloppy code, which is bad and we should stay away from that)

Hans

-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------

On 7/14/25 10:09, Hans Hagen via ntg-context wrote:
Hi, […]

  % pablo will check the other languages and also the
  % kind of language tags we want to end up in the pdf
As far as I can remember, ConTeXt differs from ISO 639 with ancient Greek. What ConTeXt considers `agr` is `grc` for ISO 639. As for other languages, which is the file that has the complete list of languages and their tagging values?
% as he will check how viewers 'speak' - yesterday i
Well, I can check that indirectly (not by Acrobat reading it aloud, but with https://ngpdf.com). Linux doesn’t have Acrobat Reader available and the only Windows machine I have access to seems to have no working read aloud feature in Acrobat Reader.
  % suddenly got some english youtube video where the
  % normally low pitch male voice (that i knew) became
  % high pitch female dutch for no reason ... and i
  % never asked for that ... so i fear future pdf
  % viewers
This happened to me with one of the “Invidious” (a YT alternative front-end) instances (it might be https://yewtu.be). About two months ago, I started having English-speaking videos automatically dubbed into Spanish (by some kind of AI). The voice sounded completely mechanical (and they also changed the gender [from female to male, in my case]).
so, the question is 'selective' or 'everywhere'
I think the selective implementation would make more sense, especially when the selection could later include more items (as required/needed by users).

BTW, besides alternative and actual text, we need to implement `/E` (expansion of acronyms and abbreviations). Since we already have `\definesynonyms`, it may only be a matter of adding proper tagging for it (even when no `\placelist[synonyms]` is invoked in the document).

Many thanks for your help, Pablo

On Mon, Jul 14, 2025 at 16:14 (+0200), Pablo Rodriguez via ntg-context wrote:
On 7/14/25 10:09, Hans Hagen via ntg-context wrote:
Hi, […]

  % pablo will check the other languages and also the
  % kind of language tags we want to end up in the pdf
As far as I can remember, ConTeXt differs from ISO 639 with ancient Greek.
What ConTeXt considers `agr` is `grc` for ISO 639.
As for other languages, which is the file that has the complete list of languages and its tagging values?
% as he will check how viewers 'speak' - yesterday i
Well, I can check that indirectly (not by Acrobat reading it aloud, but with https://ngpdf.com).
Linux doesn’t have Acrobat Reader available
The venerable 9.5.5 version for Linux is still available for download (from selected places). It is old, allegedly has some security holes involving accessing network resources, and requires a little tender loving care to get working on 64-bit systems. Having said all that, it is my primary PDF reader.

Jim

P.S. To stave off the questions about why I use it... It does (far) better font rendering than any of the other Linux programs (excluding web browsers) that I know about. The day someone figures out how to add sub-pixel rendering to any of the other Linux PDF readers might be the day they will be comparable in quality. (Or not.)

On 7/14/2025 5:47 PM, Jim wrote:
On Mon, Jul 14, 2025 at 16:14 (+0200), Pablo Rodriguez via ntg-context wrote:
On 7/14/25 10:09, Hans Hagen via ntg-context wrote:
Hi, […]

  % pablo will check the other languages and also the
  % kind of language tags we want to end up in the pdf
As far as I can remember, ConTeXt differs from ISO 639 with ancient Greek.
What ConTeXt considers `agr` is `grc` for ISO 639.
As for other languages, which is the file that has the complete list of languages and its tagging values?
% as he will check how viewers 'speak' - yesterday i
Well, I can check that indirectly (not by Acrobat reading it aloud, but with https://ngpdf.com).
Linux doesn’t have Acrobat Reader available
The venerable 9.5.5 version for Linux is still available for download (from selected places). It is old, allegedly has some security holes involving accessing network resources, and requires a little tender loving care to get working on 64-bit systems. Having said all that, it is my primary PDF reader.
Jim
P.S. To stave off the questions about why I use it... It does (far) better font rendering than any of the other Linux programs (excluding web browsers) that I know about. The day someone figures out how to add sub-pixel rendering to any of the other Linux PDF readers might be the day they will be comparable in quality. (Or not.)
when i'm on linux i use okular as that one renders quite ok, and on windows sumatra pdf, which is also ok (there's also okular on windows which views ok); the mupdf based viewers are fast too.

rendering pdf (and fonts in general) on windows has always been pretty good (there even was an msdos version by adobe) and cleartype has good anti aliasing. on linux it depends on the general setup, and i always found that ubuntu (xubuntu) was set up right wrt fonts, while others were unbearable, which is why i stayed on windows; recently i managed to set up opensuse (we use that on a server) to render fonts ok too, although it differs (and over the years i simply became too sensitive for that).

wrt acrobat .. it doesn't offer anything i need; ok, it has some javascript features that are handy for presentations (managing layers) but as i don't use it in an edit-view cycle i tend to not use these frequently, and i've given up on other viewers being compatible.

Hans

Hi Jim, On Mon, 2025-07-14 at 12:47 -0300, Jim wrote:
It does (far) better font rendering than any of the other Linux programs (excluding web browsers) that I know about. The day someone figures out how to add sub-pixel rendering to any of the other Linux PDF readers might be the day they will be comparable in quality.
If you're using Gnome, open "Gnome Tweaks" (you might have to install it first), then select both "Fonts > Rendering > Hinting = Full" and "Fonts > Rendering > Antialiasing = Subpixel (for LCD screens)", and then reboot.

If you're using Wayland and fractional scaling, adding the following file might also help:

  # ~/.config/environment.d/wayland.conf
  MOZ_ENABLE_WAYLAND=1
  QT_QPA_PLATFORM=wayland
  GDK_BACKEND=wayland,x11,*
  CLUTTER_BACKEND=wayland
  SDL_VIDEODRIVER=wayland
  ECORE_EVAS_ENGINE=wayland_egl
  ELM_ENGINE=wayland_egl
  QT_AUTO_SCREEN_SCALE_FACTOR=1
  QT_ENABLE_HIGHDPI_SCALING=1

Thanks,
-- Max

Hi Max, On Tue, Jul 15, 2025 at 02:35 (-0600), Max Chernoff via ntg-context wrote:
On Mon, 2025-07-14 at 12:47 -0300, Jim wrote:
It does (far) better font rendering than any of the other Linux programs (excluding web browsers) that I know about. The day someone figures out how to add sub-pixel rendering to any of the other Linux PDF readers might be the day they will be comparable in quality.
If you're using Gnome, open "Gnome Tweaks" (you might have to install it first), then select both "Fonts > Rendering > Hinting = Full" and "Fonts > Rendering > Antialiasing = Subpixel (for LCD screens)", and then reboot. If you're using Wayland and fractional scaling, adding the following file might also help:
  # ~/.config/environment.d/wayland.conf
  MOZ_ENABLE_WAYLAND=1
  QT_QPA_PLATFORM=wayland
  GDK_BACKEND=wayland,x11,*
  CLUTTER_BACKEND=wayland
  SDL_VIDEODRIVER=wayland
  ECORE_EVAS_ENGINE=wayland_egl
  ELM_ENGINE=wayland_egl
  QT_AUTO_SCREEN_SCALE_FACTOR=1
  QT_ENABLE_HIGHDPI_SCALING=1
Thanks for the information. I am using neither gnome nor wayland, but the non-PDF-reader programs I just examined (urxvt, emacs, firefox, xclock, fvwm3, ...) all cheerfully do sub-pixel rendering with the setup I have on my system (Slackware64 15.0). However, the PDF viewers on my system (evince, kpdf, okular, mupdf) resolutely refuse to do sub-pixel rendering.

Now, it is possible that for some reason the PDF readers need a fontconfig setting (or multiple settings) that turns on SPR for them, even though various and sundry other programs do SPR.

Might I ask you (a) to confirm that your PDF reader does, indeed, do SPR (i.e., not just everything else on your system), and (b) specifically, what PDF reader are you using that does do SPR?

Cheers. Jim

Hi Jim, On Tue, 2025-07-15 at 10:06 -0300, Jim wrote:
Might I ask you (a) To confirm that your PDF reader does, indeed, do SPR? (I.e., not just everything else on your system.) and
Ah, good point, I should have checked first. Using the following test file:

  \loadtypescriptfile[plex]

  \setupbodyfont[plex-thin, sans]
  \setupinterlinespace[1sp]

  \define[1]\makeline{%
      \setupbodyfont[#1pt]%
      \dorecurse{
          \numexpression(\textwidth / \widthofstring{l}) - 1\relax
      }{l\hfill}%
      \unskip%
      \par%
  }

  \define\makelines{%
      \processcommalist[2, 4, 6, 8, 10, 11, 12, 16, 24, 36, 72]\makeline%
  }

  \startTEXpage[width=6in]
      \makelines
      \startframedtext[
          offset=0pt,
          width=broad,
          background=color,
          backgroundcolor=black,
          color=white,
      ]
          \makelines
      \stopframedtext
  \stopTEXpage

Chromium and Firefox (pdf.js) use subpixel rendering, while Evince, Okular, MuPDF, and xpdf just use greyscale antialiasing. I usually use Firefox to view PDFs, and everything else on my system uses subpixel rendering, so I just assumed that the rest of the PDF viewers did as well.

Thanks,
-- Max

On 7/16/2025 1:14 AM, Max Chernoff via ntg-context wrote:
Hi Jim,
On Tue, 2025-07-15 at 10:06 -0300, Jim wrote:
Might I ask you (a) To confirm that your PDF reader does, indeed, do SPR? (I.e., not just everything else on your system.) and
Ah, good point, I should have checked first. Using the following test file:
\loadtypescriptfile[plex]
\setupbodyfont[plex-thin, sans]
\setupinterlinespace[1sp]
\define[1]\makeline{%
    \setupbodyfont[#1pt]%
    \dorecurse{
        \numexpression(\textwidth / \widthofstring{l}) - 1\relax
    }{l\hfill}%
    \unskip%
    \par%
}
\define\makelines{%
    \processcommalist[2, 4, 6, 8, 10, 11, 12, 16, 24, 36, 72]\makeline%
}
\startTEXpage[width=6in]
    \makelines
\startframedtext[
    offset=0pt,
    width=broad,
    background=color,
    backgroundcolor=black,
    color=white,
]
    \makelines
\stopframedtext
\stopTEXpage
Chromium and Firefox (pdf.js) use subpixel rendering, while Evince, Okular, MuPDF, and xpdf just use greyscale antialiasing. I usually use Firefox to view PDFs, and everything else on my system uses subpixel rendering, so I just assumed that the rest of the PDF viewers did as well.
all side effects of the patents involved ... pathetic large company policies ... esp given how they benefit from open source (and we're not even talking stuff that can't be invented multiple times at the same place independently)

Hans

Hi Hans (et al), On Wed, Jul 16, 2025 at 08:33 (+0200), Hans Hagen via ntg-context wrote:
On 7/16/2025 1:14 AM, Max Chernoff via ntg-context wrote:
Hi Jim,
On Tue, 2025-07-15 at 10:06 -0300, Jim wrote:
Might I ask you (a) To confirm that your PDF reader does, indeed, do SPR? (I.e., not just everything else on your system.) and
Ah, good point, I should have checked first. Using the following test file:
\loadtypescriptfile[plex]
\setupbodyfont[plex-thin, sans]
\setupinterlinespace[1sp]
\define[1]\makeline{%
    \setupbodyfont[#1pt]%
    \dorecurse{
        \numexpression(\textwidth / \widthofstring{l}) - 1\relax
    }{l\hfill}%
    \unskip%
    \par%
}
\define\makelines{%
    \processcommalist[2, 4, 6, 8, 10, 11, 12, 16, 24, 36, 72]\makeline%
}
\startTEXpage[width=6in]
    \makelines
\startframedtext[
    offset=0pt,
    width=broad,
    background=color,
    backgroundcolor=black,
    color=white,
]
    \makelines
\stopframedtext
\stopTEXpage
Chromium and Firefox (pdf.js) use subpixel rendering, while Evince, Okular, MuPDF, and xpdf just use greyscale antialiasing. I usually use Firefox to view PDFs, and everything else on my system uses subpixel rendering, so I just assumed that the rest of the PDF viewers did as well.
all side effects of the patents involved ... pathetic large company policies ... esp given how they benefit from open source (and we're not even talking stuff that can't be invented multiple times at the same place independently)
Given "all" (most of) the other programs on my system (and, apparently, Max's system) cheerfully do SPR, I don't believe that there are (for the last 5 or 20 years, anyway) any patents encumbering PDF readers.

Rather, what I discovered when this problem first really started to annoy me, at least according to what I read on the internet (so take what I write here with a grain of salt!), is that some libraries used by Linux PDF readers (reportedly pango, cairo and/or their bastard offspring pangocairo) just aren't able to do sub-pixel rendering to the canvas upon which the PDF output is drawn. Why this restriction was there in the first place (maybe patents in the last millennium?) I don't know. Why the restriction has not been removed is yet another thing I don't know. Yet another place where my lack of knowledge is both broad and deep. :-)

Jim

Hi Max (et al), On Tue, Jul 15, 2025 at 17:14 (-0600), Max Chernoff via ntg-context wrote:
On Tue, 2025-07-15 at 10:06 -0300, Jim wrote:
Might I ask you (a) To confirm that your PDF reader does, indeed, do SPR? (I.e., not just everything else on your system.) and
Ah, good point, I should have checked first. Using the following test file:
\loadtypescriptfile[plex]
\setupbodyfont[plex-thin, sans] \setupinterlinespace[1sp]
\define[1]\makeline{% \setupbodyfont[#1pt]% \dorecurse{ \numexpression(\textwidth / \widthofstring{l}) - 1\relax }{l\hfill}% \unskip% \par% }
\define\makelines{% \processcommalist[2, 4, 6, 8, 10, 11, 12, 16, 24, 36, 72]\makeline% }
\startTEXpage[width=6in] \makelines
\startframedtext[ offset=0pt, width=broad, background=color, backgroundcolor=black, color=white, ] \makelines \stopframedtext \stopTEXpage
Interesting test page. More on that below (*).
Chromium and Firefox (pdf.js) use subpixel rendering,
Indeed they do, to their credit.
while Evince, Okular, MuPDF, and xpdf just use greyscale antialiasing.
Better than nothing. A bit. I guess. :-)
I usually use Firefox to view PDFs, and everything else on my system uses subpixel rendering, so I just assumed that the rest of the PDF viewers did as well.
If only. :-(

(*) Your example shows one of the other weak points of viewers like evince. When I display the output with both acroread (or ffx) and evince, the "white on black" portion looks good in acroread, in that each line of "L"s has a more or less uniform grey level. But in evince (on my screen, anyway, YMMV), the first three lines (as well as the seventh) are very noticeably less bright than the others. Acroread: good. Evince: bad.

I have found in the past (when putting white text on a dark-coloured background for presentations) that most (Linux-based, anyway) PDF readers do a bad job of rendering text in this situation.

Consequently, every time I see someone extolling the virtues of evince, kpdf, and/or their other PDF reading program(s), I consider that I might have a badly-configured system and that I'm therefore missing out because of that. But further enquiry has, to date, always confirmed my suspicions that evince and friends don't do SPR, nor render "white on dark" well. Thus I carry on using my ancient but venerable acroread 9.5.5, awaiting that glorious future day when more PDF readers learn how to do SPR.

Cheers. Jim

On 7/14/2025 4:14 PM, Pablo Rodriguez via ntg-context wrote:
On 7/14/25 10:09, Hans Hagen via ntg-context wrote:
Hi, […] % pablo will check the other languages and also the % kind of language tags we want to end up in the pdf
As far as I can remember, ConTeXt differs from ISO 639 with ancient Greek.
What ConTeXt considers `agr` is `grc` for ISO 639.
As for other languages, which is the file that has the complete list of languages and its tagging values?
That file never made it to a stable so it's on my machine, but in the meantime we can use llg files (language goodies), so we can for instance make lang-agr.llg:

  return {
      name    = "ancient greek",
      version = "1.00",
      comment = "Some old greek stuff",
      author  = "Hans Hagen",
      options = {
          -- none so far
      },
      tags = {
          pdf = "grc",
      }
  }

which I can then load. We only need these for outliers. I'll add that possibility (assuming that you will check the relevant pdf tags). It can then in due time also replace the never really used language association feature.
Well, I can check that indirectly (not by Acrobat reading it aloud, but with https://ngpdf.com).
I happily leave that to you to test.
so, the question is 'selective' or 'everywhere'
I think the selective implementation would make more sense, especially when the selection could later include more items (as required/needed by users).
this is more a feature in the category of style and color, and we can also consider it a bit natural to structure. things like a description (alt) are more obscure and specific, so these should be very selective (iirc we have some wrapper that actually sets that property in pdf, but it might not be hooked into tagging).

i tend to see tagging as completely decoupled from pdf and couldn't motivate myself to deal with it otherwise (just like in mkii we had decoupled backends, which is why we always had kind of generic backend interfaces in context; not that it matters much today because dvi and its special variants have gone out of scope)
BTW, besides alternative and actual text we need to implement `/E` (expansion of acronyms and abbreviations).
i remember seeing that and wondering why that was useful ... if acronyms are a problem then they should not be used; one can have a list of meanings anyway
If we already have `\definesynonyms`, it may be only adding proper taggig for it (even when no `\placelist[synonyms]` is invoked in the document).
this has nothing to do with tagging; there could be some rollover feature, but those features have always been unreliable; it's in the same ballpark as the automatic url recognition. it is not that hard to implement (although one has to render the meaning into something more useful than pure text, so basically like bookmarks) but there is no need to show off here for "the sake of it can be done"

Hans

On 7/14/25 17:49, Hans Hagen via ntg-context wrote:
What ConTeXt considers `agr` is `grc` for ISO 639.
That file never made it to a stable so it's on my machine but in the meantiem we can use llg files (language goodies) so we can for instance make [...] which I can then load. We only need these for outliers. I'll add that possibility (assuming that you will check the relevant pdf tags). It can then in due time also replace the never really used language association feature.
Hi Hans,

do you want single files (such as `lang-agr.llg`) or just the ConTeXt to BCP 47 mapping?

To see how language tagging differs, just reviewing `lang-def.mkxl`, I see that https://www.rfc-editor.org/rfc/rfc5646.html#page-7 (BCP 47/RFC 5646) recommends country codes in uppercase. So we need (to play safe, also according to https://schneegans.de/lv/):

  en-us: en-US
  en-gb: en-gb
  de-de: de-DE
  de-at: de-AT
  de-ch: de-ch

The following may be defined as:

  deo: de-1901
  ua: uk
  sr-latn: sr-Latn
  sr-cyrl: sr-Cyrl
  cnr-latn: cnr-Latn
  cnr-cyrl: cnr-Cyrl
  farsi: fa
  ar-ae: ar-AE
  ar-bh: ar-BH
  ar-eg: ar-EG
  ar-in: ar-IN
  ar-kw: ar-KW
  ar-ly: ar-LY
  ar-om: ar-OM
  ar-qa: ar-QA
  ar-sa: ar-SA
  ar-sd: ar-SD
  ar-tn: ar-TN
  ar-ye: ar-YE
  ar-sy: ar-SY
  ar-iq: ar-IQ
  ar-jo: ar-JO
  ar-lb: ar-LB
  ar-dz: ar-DZ
  ar-ma: ar-MA
  cn: zh
  kr: ko
  gr: el
  agr: grc
  pt-br: pt-BR
  es-es: es-ES
  es-la: es-419
  mo: ro *

* `mo` has been deprecated and merged into `ro` (https://iso639-3.sil.org/code/ron).

There is a reference to Latvian on line 638 (btw, `lv`), but I couldn’t find any `\installlanguage` for it.

I have two questions: do you want the tags above in another format (such as `lang-agr.llg`)? And I think these are the language-related tags required for ConTeXt; am I missing something, or do you want other tags?

Many thanks for your help, Pablo
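[Editorial aside: the override table above can be sketched as a small lookup. The names `CONTEXT_TO_BCP47` and `pdf_language_tag` are illustrative only, not actual ConTeXt code; the fallback simply applies the uppercase-region convention, so a couple of entries Pablo left lowercase (en-gb, de-ch) would come out uppercased here.]

```python
# Sketch of the ConTeXt -> BCP 47 outliers listed above. Names are
# hypothetical; only codes that differ from BCP 47 need an entry.
CONTEXT_TO_BCP47 = {
    "deo": "de-1901",    # traditional German orthography
    "ua": "uk",
    "farsi": "fa",
    "cn": "zh",
    "kr": "ko",
    "gr": "el",
    "agr": "grc",        # ConTeXt's ancient Greek vs ISO 639 `grc`
    "es-la": "es-419",   # Latin American Spanish
    "mo": "ro",          # deprecated, merged into `ro`
    "sr-latn": "sr-Latn",
    "sr-cyrl": "sr-Cyrl",
    "cnr-latn": "cnr-Latn",
    "cnr-cyrl": "cnr-Cyrl",
}

def pdf_language_tag(context_code: str) -> str:
    """Return a BCP 47 tag for a ConTeXt language code."""
    code = context_code.lower()
    if code in CONTEXT_TO_BCP47:
        return CONTEXT_TO_BCP47[code]
    # Remaining codes are already language[-region]; uppercase the
    # two-letter region part as BCP 47 recommends (en-us -> en-US).
    parts = code.split("-")
    if len(parts) == 2 and len(parts[1]) == 2:
        return parts[0] + "-" + parts[1].upper()
    return code
```

For example, `pdf_language_tag("agr")` yields `grc` and `pdf_language_tag("nl")` passes through unchanged.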

On Tue, Jul 15, 2025 at 08:04:39PM +0200, Pablo Rodriguez via ntg-context wrote:
To see how language tagging differs, just reviewing `lang-def.mkxl`, I see that https://www.rfc-editor.org/rfc/rfc5646.html#page-7 (BCP 47/RFC 5646) recommends country codes in uppercase.
Matching is case-insensitive, the recommendation is only a convention, it's no safer to follow it than to write all-lowercase. Arthur

On 7/17/25 01:02, Arthur Rosendahl wrote:
On Tue, Jul 15, 2025 at 08:04:39PM +0200, Pablo Rodriguez via ntg-context wrote:
[…] (BCP 47/RFC 5646) recommends country codes in uppercase.
Matching is case-insensitive, the recommendation is only a convention,
Many thanks for your reply, Arthur. Of course, a recommendation is never a requirement.
it's no safer to follow it than to write all-lowercase.
Excuse my extremely simplistic approach: following recommendations may avoid extra issues with ConTeXt-generated documents. If, for whatever reason, any PDF viewer (Acrobat included) is case-sensitive and expects country codes in uppercase, lowercase codes will be problematic. If any PDF viewer (or all of them) is (are) case-insensitive for country codes, it won’t hurt to have them in uppercase. Many thanks for your help, Pablo
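[Editorial aside: RFC 5646 (section 2.1.1) states that tags are case-insensitive but recommends a canonical casing: language subtags lowercase, script subtags Titlecase, region subtags UPPERCASE. A consumer that normalizes before comparing makes the whole debate moot; this sketch (function names are illustrative) shows one way to do that.]

```python
# Normalize a BCP 47 tag to the recommended casing, so comparison does
# not depend on how the producer happened to write it.
def canonicalize_bcp47(tag: str) -> str:
    parts = tag.split("-")
    out = [parts[0].lower()]                 # primary language subtag
    for p in parts[1:]:
        if len(p) == 4 and p.isalpha():      # script subtag, e.g. Latn
            out.append(p.capitalize())
        elif len(p) == 2 and p.isalpha():    # region subtag, e.g. US
            out.append(p.upper())
        else:                                # digits, variants, etc.
            out.append(p.lower())
    return "-".join(out)

def same_tag(a: str, b: str) -> bool:
    """Case-insensitive comparison, as BCP 47 matching requires."""
    return canonicalize_bcp47(a) == canonicalize_bcp47(b)
```

For example, `canonicalize_bcp47("SR-LATN")` gives `sr-Latn`, and `same_tag("en-GB", "EN-gb")` is true.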

Hi Pablo, On Thu, Jul 17, 2025 at 06:41:21PM +0200, Pablo Rodriguez via ntg-context wrote:
Excuse my extremely simplistic approach: following recommendations may avoid extra issues with ConTeXt-generated documents.
If for whatever reason any PDF browser (Acrobat included) is case-sensitive for country codes in uppercase chars, lowercase chars will be problematic.
If any PDF browser (or all of them) is (are) case-insensitive for country codes, it won’t hurt to have them in uppercase.
In an ideal world, I would of course agree with you that we should follow the recommendation of the “source standards” of BCP 47 (ISOs 639, 3166 and 15924, in particular). However, I know from experience that Hans is more likely to listen to advice that tells him to put everything in lowercase ;-) so in a sense, it does hurt to put country codes in uppercase or script codes in title case. Best, Arthur

On 7/18/25 18:24, Arthur Rosendahl wrote:
[…] In an ideal world, I would of course agree with you that we should follow the recommendation of the “source standards” of BCP 47 (ISOs 639, 3166 and 15924, in particular). However, I know from experience that Hans is more likely to listen to advice that tells him to put everything in lowercase ;-) so in a sense, it does hurt to put country codes in uppercase or script codes in title case.
Many thanks for your reply, Arthur.

In theory, I totally agree with you (and practically, I don’t have anything against your sound reasoning [besides being the one checking that particular piece of tagging]). It is practice that makes me give some warnings (I’m afraid).

In late 2023, Hans announced digital signatures in ConTeXt (OpenSSL doing the crypto part). I started checking them six months later (about a year ago). I had a persistent issue: even if the signed PDF document was perfectly valid by the Arlington PDF model, Acrobat Reader wasn’t able to display the signature and prompted for saving the document (secretly removing it). It was a real problem, since it was extremely easy to lose a signed PDF document, and the most common PDF viewer wasn’t displaying the signature.

It turned out `/SigFlags 3` was missing in the `/Catalog/AcroForm` dictionary (to specify that the document contained at least a single signature, in order to both display the signature field panel and save only by appending to the document [incremental updates]). https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandards/PDF32000_2008.... makes it clear that this is optional. This doesn’t seem to have been modified in PDF 2.x.

Of course, as I got in one reply, `/SigFlags` in an already signed PDF document was only an implementation choice. But our only option was to get signatures displayed and kept in their documents.

That being said, although I tend to think that the case recommendations in BCP 47 might improve readability, I’m totally indifferent to cases in language codes (as long as they work).

Best wishes, Pablo