\pdfinfo doesn't seem to support Unicode: when I compile the attached example with luatex I get symbols like هصة instead of the proper (Arabic) Unicode strings. Shouldn't luatex default to Unicode here too?

Regards,
Khaled

--
Khaled Hosny
Arabic localizer and member of Arabeyes.org team
Khaled Hosny wrote:
\pdfinfo doesn't seem to support Unicode: when I compile the attached example with luatex I get symbols like هصة instead of the proper (Arabic) Unicode strings. Shouldn't luatex default to Unicode here too?
this is a backend c.q. macro package issue ... (btw this can also be done in pdftex, but i never came to enabling the code in context because there was nothing to test)

Hans

-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | fax: 038 477 53 74 | www.pragma-ade.com | www.pragma-pod.nl
-----------------------------------------------------------------
On Fri, Jun 27, 2008 at 11:49:07PM +0200, Hans Hagen wrote:
this is a backend c.q. macro package issue ... (btw this can also be done in pdftex, but i never came to enabling the code in context because there was nothing to test)
I suspected that, since hyperref with latex does this, but I found that xetex supports Unicode pdfinfo directly and thought luatex should do this too. Try generating the attached example with xetex.

Regards,
Khaled
Khaled Hosny wrote:
I suspected that, since hyperref with latex does this, but I found that xetex supports Unicode pdfinfo directly and thought luatex should do this too.
I could intercept \pdfinfo, but I suspect there are many more locations (like bookmarks). Hans, would automatic conversion to utf16 be doable?
Try generating the attached example with xetex.
It is a little easier for xetex because it is not normal for xetex users to create their own literal pdf output. In pdftex (and thus luatex) this happens all the time.
Taco Hoekwater wrote:
I could intercept \pdfinfo, but I suspect there are many more locations (like bookmarks). Hans, would automatic conversion to utf16 be doable?
i'd rather wait with such things till the backend is redesigned; in principle one can have pdfdoc encoding or unicode in (strings) but also use <hex numbered strings>; it may help at some point to have a string.utf8to16 function which can be used in drivers (going from utf8 to utf16 is somewhat cumbersome in lua), so that backend code could do

  \pdfinfo{ ... /someentry {\directlua0{tex.write(utf16bom .. string.utf8toutf16{...})}} ... }

so, no intercept (after all, that would involve parsing and always be a moving target) but just a helper (it's already doable but mostly a matter of supporting it -)

Hans
Khaled Hosny wrote:
I suspected that, since hyperref with latex does this, but I found that xetex supports Unicode pdfinfo directly and thought luatex should do this too.
in luatex eventually we will have a more generic backend concept and try to minimize the number of specific primitives

for instance, at some point we will have something pdf.info being a lua table (representing a dictionary) and then one sets lua strings and these are just sequences of bytes; this is why a helper makes more sense

  pdf.info.title = string.utf8valueto16be(0xFEFF) .. string.utf8toutf16be(somestring)

in the meantime such helpers could also be used in the regular \pdfinfo

as taco mentioned, xetex is a different animal .. ok, there could be a primitive doing the conversion, but there the pdf support is driven by the dvipdfmx backend; also, keep in mind that in practice there are many more places where strings show up (e.g. in user annotations) and not every string is representing text

(currently in context i use hex strings instead)

Hans
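[The string.utf8toutf16be helper mentioned above was, at the time of this thread, only a proposal. A minimal sketch of the conversion such a helper would have to perform, written in Python purely for illustration (the function name utf8_to_utf16be is made up):]

```python
def utf8_to_utf16be(s: str, with_bom: bool = True) -> bytes:
    """Convert a Unicode string to UTF-16BE bytes, optionally BOM-prefixed."""
    out = bytearray()
    if with_bom:
        out += b"\xfe\xff"               # UTF-16 big-endian byte order mark
    for cp in (ord(ch) for ch in s):
        if cp < 0x10000:
            out += cp.to_bytes(2, "big")
        else:                            # astral plane: encode a surrogate pair
            cp -= 0x10000
            hi = 0xD800 + cp // 1024     # high surrogate
            lo = 0xDC00 + cp % 1024      # low surrogate
            out += hi.to_bytes(2, "big") + lo.to_bytes(2, "big")
    return bytes(out)
```

[The surrogate-pair arithmetic here is the same as in the Lua functions later in this thread.]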
On Sat, Jun 28, 2008 at 01:46:17PM +0200, Hans Hagen wrote:
in luatex eventually we will have a more generic backend concept and try to minimize the number of specific primitives
for instance, at some point we will have something pdf.info being a lua table (representing a dictionary) and then one sets lua strings and these are just sequences of bytes; this is why a helper makes more sense
pdf.info.title = string.utf8valueto16be(0xFEFF) .. string.utf8toutf16be(somestring)
in the meantime such helpers could also be used in the regular \pdfinfo
as taco mentioned, xetex is a different animal .. ok, there could be a primitive doing the conversion, but there the pdf support is driven by the dvipdfmx backend; also, keep in mind that in practice there are many more places where strings show up (e.g. in user annotations) and not every string is representing text
(currently in context i use hex strings instead)
Thanks for the clarification, I'm trying to understand how stuff works and this definitely helps. So, could we have this support at the context level, and everyone will be happy :)

Regards,
Khaled
Khaled Hosny wrote:
Thanks for the clarification, I'm trying to understand how stuff works and this definitely helps. So, could we have this support at the context level, and everyone will be happy :)
you can try the beta ... (best move the discussion to the context list since it's not related that much to luatex)

i wonder what bidi does in bookmarks and so on ... maybe more is needed

Hans
Khaled Hosny writes:
I suspected that, since hyperref with latex does this, but I found that xetex supports Unicode pdfinfo directly and thought luatex should do this too.
Yes, hyperref with LaTeX does this, but I recommend looking at _how_ it does this. It is another case of "code nobody but Heiko would come up with and very few people would be able to maintain".

--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
On Sun, Jun 29, 2008 at 11:06:03AM +0200, David Kastrup wrote:
Yes, hyperref with LaTeX does this, but I recommend looking at _how_ it does this. It is another case of "code nobody but Heiko would come up with and very few people would be able to maintain".
I actually looked before posting here, but I have yet to understand all this TeX black magic.

Regards,
Khaled
David Kastrup wrote:
Yes, hyperref with LaTeX does this, but I recommend looking at _how_ it does this. It is another case of "code nobody but Heiko would come up with and very few people would be able to maintain".
there are two methods (afaik) ... either pushing utf16 in (strings) or hexed utf16 in <strings>; doing that is not so much the problem (pdfdocencoding is more troublesome); as usual with this kind of data the biggest deal is to intercept / handle content that makes no sense in bookmarks (math, tex commands, unexpanded stuff, etc)

Hans
Hello,
\pdfinfo doesn't seem to support Unicode: when I compile the attached example with luatex I get symbols like هصة instead of the proper (Arabic) Unicode strings. Shouldn't luatex default to Unicode here too?
The following function takes a LuaTeX (UTF-8) string and converts it to UTF-16 with BOM and writes it to TeX (note that it requires a recent beta of LuaTeX, cf. my mail to this list on 8th April 2008):

  local sprint = tex.sprint
  local char   = unicode.utf8.char

  function convertPDFstring(s)
    -- UTF-16 BOM
    sprint(char(0x110000 + 254))
    sprint(char(0x110000 + 255))
    -- The string
    for c in string.utfvalues(s) do
      if c < 0x10000 then
        sprint(char(0x110000 + c / 256))
        sprint(char(0x110000 + c % 256))
      else
        c = c - 0x10000
        local c1 = c / 1024 + 0xD800
        local c2 = c % 1024 + 0xDC00
        sprint(char(0x110000 + c1 / 256))
        sprint(char(0x110000 + c1 % 256))
        sprint(char(0x110000 + c2 / 256))
        sprint(char(0x110000 + c2 % 256))
      end
    end
  end

Usage (to set the title of the document):

  \pdfinfo{/Title(\directlua0{convertPDFstring('my title')})}

Jonathan
Jonathan Sauer wrote:
  function convertPDFstring(s)
    -- UTF-16 BOM
    sprint(char(0x110000 + 254))
    sprint(char(0x110000 + 255))
    -- The string
    for c in string.utfvalues(s) do
      if c < 0x10000 then
        sprint(char(0x110000 + c / 256))
        sprint(char(0x110000 + c % 256))
      else
        c = c - 0x10000
        local c1 = c / 1024 + 0xD800
        local c2 = c % 1024 + 0xDC00
        sprint(char(0x110000 + c1 / 256))
        sprint(char(0x110000 + c1 % 256))
        sprint(char(0x110000 + c2 / 256))
        sprint(char(0x110000 + c2 % 256))
      end
    end
  end
slightly more efficient ... (char and byte accept multiple arguments, and you can use write which is faster too)

  local char  = unicode.utf8.char
  local write = tex.write

  function convertPDFstring(s)
    write(char(0x110000 + 254, 0x110000 + 255))
    for c in string.utfvalues(s) do
      if c < 0x10000 then
        write(char(0x110000 + c/256, 0x110000 + c%256))
      else
        c = c - 0x10000
        local c1 = c / 1024 + 0xD800
        local c2 = c % 1024 + 0xDC00
        write(char(0x110000 + c1/256, 0x110000 + c1%256, 0x110000 + c2/256, 0x110000 + c2%256))
      end
    end
  end
\pdfinfo{/Title(\directlua0{convertPDFstring('my title')})}
in context i use something quick and dirty (no > 0x10000 checking, but from your function i can deduce the magic numbers -)

  function pdf.hexify(str)
    texwrite("feff")
    for b in str:utfvalues() do
      texwrite(("%04x"):format(b))
    end
  end

  \pdfinfo{/Title(\directlua0{pdf.hexify<'my title'>})}

so <> instead of () as string delimiter

Hans
Hello,
slightly more efficient ... (char and byte accept multiple arguments, and you can use write which is faster too)
Thanks!
in context i use something quick and dirty (no > 0x10000 checking, but from your function i can deduce the magic numbers -)
Well, they are simply the ones mentioned in http://en.wikipedia.org/wiki/Utf-16.
  function pdf.hexify(str)
    texwrite("feff")
    for b in str:utfvalues() do
      texwrite(("%04x"):format(b))
    end
  end
\pdfinfo{/Title(\directlua0{pdf.hexify<'my title'>})}
so <> instead of () as string delimiter
How does that work? Jonathan
Jonathan Sauer wrote:
Hello,
slightly more efficient ... (char and byte accept multiple arguments, and you can use write which is faster too)
Thanks!
in context i use something quick and dirty (no > 0x10000 checking, but from your function i can deduce the magic numbers -)
Well, they are simply the ones mentioned in http://en.wikipedia.org/wiki/Utf-16.
sure, but i didn't realize that a simple / worked ok; trunc/round stuff and so on

anyhow, a helper function in luatex would be handy, not that this is such a critical issue; in a tex run hardly any utf16 conversion has to take place
  function pdf.hexify(str)
    texwrite("feff")
    for b in str:utfvalues() do
      texwrite(("%04x"):format(b))
    end
  end
two variants

  function pdf.hexify(str)
    texwrite("feff" .. utf.gsub(str, ".", function(c)
      local b = byte(c)
      if b < 0x10000 then
        return ("%04x"):format(b)
      else
        b = b - 0x10000 -- subtract 0x10000 before splitting into surrogates
        return ("%04x%04x"):format(b/1024 + 0xD800, b%1024 + 0xDC00)
      end
    end))
  end

  function pdf.hexify(str)
    texwrite("feff")
    for b in str:utfvalues() do
      if b < 0x10000 then
        texwrite(("%04x"):format(b))
      else
        b = b - 0x10000 -- subtract 0x10000 before splitting into surrogates
        texwrite(("%04x%04x"):format(b/1024 + 0xD800, b%1024 + 0xDC00))
      end
    end
  end
\pdfinfo{/Title(\directlua0{pdf.hexify<'my title'>})}
so <> instead of () as string delimiter
How does that work?
in pdf traditionally strings (that is, the ones that represented bookmarks and such) were in pdf doc encoding, so

  (pdfdoc encoded string)

then they added utf16 support

  (utf16bom followed by utf16 sequence)

that's still strings. However, at some point another notation was introduced:

  <hex sequence>

which again is utf16 but this time hex encoded (less efficient, but so seldom used that it does not really matter)

from (also pdftex's) perspective, both are doable, but the hex one is handier when tracing

Hans
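[To make the <hex sequence> form concrete, here is a minimal sketch of how a title becomes a hex string, in Python purely for illustration (the function name pdf_hex_string is made up); it follows the same "feff" + %04x pattern as the Lua code in this thread, plus surrogate pairs for codepoints above 0xFFFF:]

```python
def pdf_hex_string(s: str) -> str:
    """Render a string as a PDF <...> hex string: UTF-16BE with BOM, hex digits."""
    units = ["feff"]                     # UTF-16 byte order mark
    for cp in (ord(ch) for ch in s):
        if cp < 0x10000:
            units.append("%04x" % cp)
        else:                            # surrogate pair for the astral plane
            cp -= 0x10000
            units.append("%04x%04x" % (0xD800 + cp // 1024, 0xDC00 + cp % 1024))
    return "<%s>" % "".join(units)
```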
Hans Hagen writes:
sure, but i didn't realize that a simple / worked ok; trunc/round stuff and so on
Oh, it doesn't work. It is the %04x which does the truncation. %04.0f in contrast rounds.

No idea whether this is documented/intended behavior.

--
David Kastrup
David Kastrup wrote:
Oh, it doesn't work. It is the %04x which does the truncation. %04.0f in contrast rounds.
No idea whether this is documented/intended behavior.
i wonder too (format is not that well documented, mostly a reference to some c function); last week i found out that %4s works but also %-4s, which was just a guess when i tried it -)

Hans
Hello,
\pdfinfo{/Title(\directlua0{pdf.hexify<'my title'>})}
so <> instead of () as string delimiter
How does that work?
[...]
<hex sequence>
which again is utf16 but this time hex encoded (less efficient but so seldom used that it does not really matter)
Ah, so you meant:

  \pdfinfo{/Title<\directlua0{pdf.hexify('my title')}>}

I got confused because the parameter to pdf.hexify was delimited by <> instead of ().
Jonathan
Hello,

On Tue, Jul 01, 2008 at 11:31:15AM +0200, Hans Hagen wrote:
in pdf traditionally strings (that is, the ones that represented bookmarks and such) were in pdf doc encoding, so
(pdfdoc encoded string)
then they added utf16 support
(utf16bom followed by utf16 sequence)
that's still strings. However, at some point another notation was introduced:
<hex sequence>
which again is utf16
or a string using PDFDocEncoding. The <> notation can be used anywhere where a PDF string is expected, regardless of the encoding.
but this time hex encoded (less efficient but so seldom used that it does not really matter)
Not to forget: Some characters inside (...) need to be escaped
(`\', unmatched `(' and `)', line ends, ...)
Yours sincerely
Heiko
Hello,
Not to forget: Some characters inside (...) need to be escaped (`\', unmatched `(' and `)', line ends, ...)
Of course (although according to the PDF spec, line ends need not be escaped):

  local char  = unicode.utf8.char
  local write = tex.write

  function convertPDFstring(s)
    -- UTF-16 BOM
    write(char(0x110000 + 254, 0x110000 + 255))
    -- The string
    for c in string.utfvalues(s) do
      -- Escape (, ) and \. Since the string is read before it is decoded,
      -- do not encode the escape sequence as UTF-16, but only escape the
      -- second byte of the UTF-16 byte pair
      if c == 40 then
        write(char(0x110000, 0x110000 + 92, 0x110000 + 40))
      elseif c == 41 then
        write(char(0x110000, 0x110000 + 92, 0x110000 + 41))
      elseif c == 92 then
        write(char(0x110000, 0x110000 + 92, 0x110000 + 92))
      elseif c < 0x10000 then
        write(char(0x110000 + c / 256, 0x110000 + c % 256))
      else
        c = c - 0x10000
        local c1 = c / 1024 + 0xD800
        local c2 = c % 1024 + 0xDC00
        write(char(
          0x110000 + c1 / 256, 0x110000 + c1 % 256,
          0x110000 + c2 / 256, 0x110000 + c2 % 256))
      end
    end
  end

Jonathan
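[The byte-level effect of this escaping can be sketched as follows, in Python purely for illustration (the function name pdf_literal_utf16 is made up): ( , ) and \ get a backslash inserted before the low byte of their UTF-16 code unit, mirroring the Lua function above:]

```python
def pdf_literal_utf16(s: str) -> bytes:
    """Bytes of a PDF literal string: UTF-16BE with BOM, ( ) \\ escaped."""
    out = bytearray(b"\xfe\xff")                 # UTF-16BE BOM
    for cp in (ord(ch) for ch in s):
        if cp in (0x28, 0x29, 0x5C):             # ( ) \ need escaping
            out += bytes([0x00, 0x5C, cp])       # high byte, backslash, low byte
        elif cp < 0x10000:
            out += cp.to_bytes(2, "big")
        else:                                    # surrogate pair
            cp -= 0x10000
            out += (0xD800 + cp // 1024).to_bytes(2, "big")
            out += (0xDC00 + cp % 1024).to_bytes(2, "big")
    return bytes(out)
```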
On Wed, Jul 02, 2008 at 09:55:33AM +0200, Jonathan Sauer wrote:
Not to forget: Some characters inside (...) need to be escaped (`\', unmatched `(' and `)', line ends, ...)
Of course (although according to the PDF spec, line ends need not be escaped):
| Within a literal string, the backslash (\) is used as an escape character
| for various purposes, such as to include newline characters, [...]
and
| If a string is too long to be conveniently placed on a single line, it may
| be split across multiple lines by using the backslash character at the end
| of a line to indicate that the string continues on the following line. The
| backslash and the end-of-line marker following it are not considered part
| of the string.
and
| If an end-of-line marker appears within a literal string without a
| preceding backslash, the result is equivalent to \n (regardless of whether
| the end-of-line marker was a carriage return, a line feed, or both).
Yours sincerely
Heiko
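[A minimal reading of the end-of-line rules quoted in the spec passages above, sketched in Python for illustration only (this is not a full PDF literal-string parser, and normalize_eol is a made-up name): a backslash before an end-of-line marker removes both, while a bare end-of-line marker (CR, LF, or CRLF) reads as a single \n.]

```python
def normalize_eol(raw: bytes) -> bytes:
    """Apply the PDF literal-string end-of-line rules to raw string bytes."""
    out = bytearray()
    i = 0
    while i < len(raw):
        b = raw[i]
        if b == 0x5C and i + 1 < len(raw) and raw[i + 1] in (0x0D, 0x0A):
            i += 2                               # line continuation: drop \ and EOL
            if raw[i - 1] == 0x0D and i < len(raw) and raw[i] == 0x0A:
                i += 1                           # \<CR><LF>: drop the LF as well
        elif b in (0x0D, 0x0A):
            out.append(0x0A)                     # bare EOL reads as \n
            i += 1
            if b == 0x0D and i < len(raw) and raw[i] == 0x0A:
                i += 1                           # CRLF is one EOL marker
        else:
            out.append(b)
            i += 1
    return bytes(out)
```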
Hello,
Of course (although according to the PDF spec, line ends need not be escaped):
| Within a literal string, the backslash (\) is used as an escape
| character for various purposes, such as to include newline characters,
| [...]
But that does not mean that a newline *must* be escaped.
and
| If a string is too long to be conveniently placed on a single line, it
| may be split across multiple lines by using the backslash character at
| the end of a line to indicate that the string continues on the
| following line. The backslash and the end-of-line marker following it
| are not considered part of the string.
Yes, but as the newline is stripped from the string, this is just for convenience when creating a PDF document or parts of it in a text editor. Like the \<newline> combination in Java's resource bundles.
and
| If an end-of-line marker appears within a literal string without a
| preceding backslash, the result is equivalent to \n (regardless of
| whether the end-of-line marker was a carriage return, a line feed,
| or both).
You are correct; when not escaped, a newline might be changed. Although if the string is a text string (as opposed to containing binary data), this should not matter.
Jonathan
On Wed, Jul 02, 2008 at 02:11:18PM +0200, Jonathan Sauer wrote:
Of course (although according to the PDF spec, line ends need not be escaped):
| If a string is too long to be conveniently placed on a single line, it
| may be split across multiple lines by using the backslash character at
| the end of a line to indicate that the string continues on the
| following line. The backslash and the end-of-line marker following it
| are not considered part of the string.
Yes, but as the newline is stripped from the string,
That's the counterexample for "no need for escaping".
| If an end-of-line marker appears within a literal string without a
| preceding backslash, the result is equivalent to \n (regardless of
| whether the end-of-line marker was a carriage return, a line feed,
| or both).
You are correct; when not escaped, a newline might be changed. Although if the string is a text string (opposed to containing binary data), this should not matter.
It does matter, <CR> and <LF> are different in bookmarks (only <LF> generates a line break in AR7/8).
Yours sincerely
Heiko
Hello,

| If an end-of-line marker appears within a literal string without a
| preceding backslash, the result is equivalent to \n (regardless of
| whether the end-of-line marker was a carriage return, a line feed,
| or both).

You are correct; when not escaped, a newline might be changed. Although if the string is a text string (as opposed to containing binary data), this should not matter.
It does matter, <CR> and <LF> are different in bookmarks (only <LF> generates a line break in AR7/8).
Oh. Well, by not escaping a newline in the string, it is always treated by the readers as <LF>, no matter if it is an <LF> or a <CR>. So it always generates a line break in bookmarks. IMO the desired behaviour.
Jonathan
On Thu, Jul 03, 2008 at 10:40:54AM +0200, Jonathan Sauer wrote:
Hello,
| If an end-of-line marker appears within a literal string without a
| preceding backslash, the result is equivalent to \n (regardless of
| whether the end-of-line marker was a carriage return, a line feed,
| or both).
You are correct; when not escaped, a newline might be changed. Although if the string is a text string (as opposed to containing binary data), this should not matter.
It does matter, <CR> and <LF> are different in bookmarks (only <LF> generates a line break in AR7/8).
Oh. Well, by not escaping a newline in the string, it is always treated by the readers als <LF>, no matter if it is an <LF> or a <CR>. So it always generates a line break in bookmarks. IMO the desired behaviour.
But <CR> or <CR><LF> are changed, and <BS><LF> are removed (BS=backslash), ...
Therefore it isn't true that new lines don't need escaping in general.
It's a very dangerous business to neglect escaping needs.
Strings in the PDF format are used in many circumstances. They can even have different semantics at different levels (at low-level parsing, strings are just a sequence of bytes; see embedded file names, for example: they are sorted as byte strings in a name tree and interpreted as file names).
Yours sincerely
Heiko
participants (6)
- David Kastrup
- Hans Hagen
- Heiko Oberdiek
- Jonathan Sauer
- Khaled Hosny
- Taco Hoekwater