"Hiding" columns in m-database & TABLE
Hi all,

I'd like to process a csv file (with the database module) in order to typeset a (nice) TABLE. However, there are a few columns I'd like to omit. I can (of course) hand-edit the csv file, but is there a way to do it automatically? Something like

\setupTABLE[column][3,4,5][kill]

I used [empty=yes,width=0pt] in place of [kill], and it worked, but it seems like a hack (and I don't know whether it actually processes the cells I'm omitting -- which in my case wouldn't bother me, but might be an additional layer of inelegance ;)). Any better ideas?

Best,
Marcin Borkowski
http://octd.wmi.amu.edu.pl/en/Marcin_Borkowski
Adam Mickiewicz University
On Thu, Nov 22, 2012 at 12:57 PM, Marcin Borkowski wrote:
> I'd like to process a csv file (with the database module) in order to
> typeset a (nice) TABLE. However, there are a few columns I'd like to
> omit. [...] Is there a way to do it automatically? Something like
>
> \setupTABLE[column][3,4,5][kill]
If you have up to 9 columns, you could use

\def\ProcessingLine#1#2#3#4#5#6#7{%
  \bTR\bTD#1\eTD\bTD#2\eTD\bTD#6\eTD\bTD#7\eTD\eTR}

and then [command=\ProcessingLine]

Mojca
On 2012-11-22, at 13:26:52, Mojca Miklavec wrote:
> If you have up to 9 columns, you could use
>
> \def\ProcessingLine#1#2#3#4#5#6#7{%
>   \bTR\bTD#1\eTD\bTD#2\eTD\bTD#6\eTD\bTD#7\eTD\eTR}
>
> and then [command=\ProcessingLine]
Well, something like 20 columns (on A4 landscape). ;)

It turns out that my method somehow doesn't work well without also setting height=0pt; then it's fine, but I'm still wondering about a cleaner way.
Best,
Marcin
For example, such files are easily manipulated using awk:

awk -F, 'BEGIN { OFS = "," } { print $1, $2, $3, $5, $7 }' data.csv > interesting.csv

and this can be used in a pipeline... (The command belongs in single quotes so that the shell does not expand $1, $2, ...; note also that naive field splitting does not cope with quoted fields that contain commas.)
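The same column selection can be sketched in Python (a cross-language illustration, not part of the thread's ConTeXt setup); unlike plain awk field splitting, the standard csv module copes with quoted fields:

```python
import csv
import io

def select_columns(data, columns):
    """Keep only the given (1-based) columns of each CSV record."""
    reader = csv.reader(io.StringIO(data))
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in reader:
        writer.writerow([row[i - 1] for i in columns])
    return out.getvalue()

# keep columns 1, 2, 3, 5 and 7, as in the awk one-liner
print(select_columns("a,b,c,d,e,f,g\n1,2,3,4,5,6,7\n", [1, 2, 3, 5, 7]))
# a,b,c,e,g
# 1,2,3,5,7
```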
Alan
On Thu, 22 Nov 2012 15:36:45 +0100, Marcin Borkowski wrote:

> Well, something like 20 columns (on A4 landscape). ;)
>
> It turns out that my method somehow doesn't work well without also
> setting height=0pt; then it's fine, but I'm still wondering about a
> cleaner way.

Best,
-- Alan Braslau CEA DSM-IRAMIS-SPEC CNRS URA 2464 Orme des Merisiers 91191 Gif-sur-Yvette cedex FRANCE tel: +33 1 69 08 73 15 fax: +33 1 69 08 87 86 mailto:alan.braslau@cea.fr
On 11/22/2012 3:36 PM, Marcin Borkowski wrote:
> Well, something like 20 columns (on A4 landscape). ;)
>
> It turns out that my method somehow doesn't work well without also
> setting height=0pt; then it's fine, but I'm still wondering about a
> cleaner way.
I've added a splitter to the core:

\startluacode
local mycsvsplitter = utilities.parsers.csvsplitter {
    separator = ",",
    quote     = '"',
}

local crap = [[
"1","2","3","4"
"a","b","c","d"
]]

local mycrap = mycsvsplitter(crap)

context.bTABLE()
for i=1,#mycrap do
    context.bTR()
    local c = mycrap[i]
    for i=1,#c do
        context.bTD()
        context(c[i])
        context.eTD()
    end
    context.eTR()
end
context.eTABLE()
\stopluacode

Hans

-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | voip: 087 875 68 74
www.pragma-ade.com | www.pragma-pod.nl
-----------------------------------------------------------------
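The splitter's contract (CSV text in, a table of row tables out) can be mirrored in Python with the standard csv module; the function name here is only illustrative, not ConTeXt's API:

```python
import csv
import io

def csv_split(data, separator=",", quote='"'):
    """Split CSV text into a list of rows, each a list of field strings."""
    reader = csv.reader(io.StringIO(data), delimiter=separator, quotechar=quote)
    # drop empty rows produced by leading/trailing newlines in the data
    return [row for row in reader if row]

rows = csv_split('"1","2","3","4"\n"a","b","c","d"\n')
print(rows)  # [['1', '2', '3', '4'], ['a', 'b', 'c', 'd']]
```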
Hi Hans,

although csv is not a standard per se, there is nevertheless an RFC: http://tools.ietf.org/html/rfc4180. Can we have an optional RFC-compliant parser as well? That entails interpreting the first line as a field header if it consists entirely of unquoted fields -- a neat feature, as one can treat these as column identifiers to query fields in a more natural fashion.

\starttext

\startluacode
local P, R, V, S, C, Ct, Cs, Carg, Cc, Cg, Cf =
      lpeg.P, lpeg.R, lpeg.V, lpeg.S, lpeg.C, lpeg.Ct,
      lpeg.Cs, lpeg.Carg, lpeg.Cc, lpeg.Cg, lpeg.Cf

local lpegmatch = lpeg.match
local patterns  = lpeg.patterns
local newline   = patterns.newline

-----------------------------------------------------------------------
-- RFC 4180 csv parser
-----------------------------------------------------------------------
-- based on http://tools.ietf.org/html/rfc4180
-- notable deviations from the RFC grammar:
--  · allow overriding "comma" and "quote" chars in the spec
--  · the "textdata" rule is extrapolated to mean, basically,
--    "everything but the quote character", not just the character
--    range as in the RFC
--  · newline from l-lpeg.lua instead of "crlf"

local rfc4180_spec = { separator = ",", quote = [["]] }

utilities.parsers.rfc4180_splitter = function (specification)
    specification     = specification or rfc4180_spec
    local separator   = specification.separator --> rfc: COMMA
    local quotechar   = P(specification.quote)  --> DQUOTE
    local dquotechar  = quotechar * quotechar   --> 2DQUOTE
                      / specification.quote
    local separator   = S(separator ~= "" and separator or ",")
    local escaped     = quotechar
                      * Cs((dquotechar + (1 - quotechar))^0)
                      * quotechar
    local non_escaped = C((1 - quotechar - newline - separator)^1)
    local field       = escaped + non_escaped
    local record      = Ct(field * (separator * field)^0)
    local name        = field -- sic!
    local header      = Cg(Ct(name * (separator * name)^0), "header")
    local file        = Ct((header * newline)^-1 -- optional header
                         * record * (newline * record)^0
                         * newline^0)
    return function (data)
        return lpegmatch (file, data)
    end
end

-----------------------------------------------------------------------
-- example writer (natural table)
-----------------------------------------------------------------------
local print_csv_table = function (tab)
    local header = tab.header
    context.setupTABLE({ frame = "off" })
    context.setupTABLE({ "header" }, { background = "screen" })
    context.bTABLE()
    if header then
        context.bTABLEhead()
        context.bTR()
        for i=1,#header do
            context.bTH()
            context(header[i])
            context.eTH()
        end
        context.eTR()
        context.eTABLEhead()
    end
    context.bTABLEbody()
    for i=1,#tab do
        context.bTR()
        local c = tab[i]
        for i=1,#c do
            context.bTD()
            context(c[i])
            context.eTD()
        end
        context.eTR()
    end
    context.eTABLEbody()
    context.eTABLE()
end

-----------------------------------------------------------------------
-- usage
-----------------------------------------------------------------------
local mycsvsplitter = utilities.parsers.rfc4180_splitter ()

local crap = [[
first,second,third,fourth
"1","2","3","4"
"a","b","c","d"
"foo","bar""baz","boogie","xyzzy"
]]

local mycrap = mycsvsplitter(crap)
print_csv_table (mycrap)
\stopluacode

\stoptext
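For comparison, Python's standard csv module implements essentially the same RFC 4180 conventions (header line, doubled quotes as escapes inside quoted fields); this is only a cross-check of the expected behaviour, not part of the ConTeXt code:

```python
import csv
import io

data = ('first,second,third,fourth\n'
        '"1","2","3","4"\n'
        '"foo","bar""baz","boogie","xyzzy"\n')

reader = csv.reader(io.StringIO(data))
header = next(reader)   # treat the first record as column names
rows = list(reader)

print(header)   # ['first', 'second', 'third', 'fourth']
print(rows[1])  # ['foo', 'bar"baz', 'boogie', 'xyzzy']  -- "" unescaped to "
```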
On 11/23/2012 1:38 PM, Philipp Gesang wrote:
> although csv is not a standard per se, there is nevertheless an RFC:
> http://tools.ietf.org/html/rfc4180. Can we have an optional
> RFC-compliant parser as well?
I patched your version a bit:

local mycsvsplitter = utilities.parsers.rfc4180splitter()

local crap = [[
first,second,third,fourth
"1","2","3","4"
"a","b","c","d"
"foo","bar""baz","boogie","xyzzy"
]]

local list, names = mycsvsplitter(crap,true)
-- local list     = mycsvsplitter(crap)

The flag tells it to return the header (the first record) separately.

Hans
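A rough Python analogue of the patched interface (the two-value return controlled by a flag; the names are illustrative only, not ConTeXt's):

```python
import csv
import io

def rfc4180_split(data, with_header=False):
    """Parse CSV text; with the flag set, return (records, header) separately."""
    reader = csv.reader(io.StringIO(data))
    rows = [row for row in reader if row]
    if with_header:
        return rows[1:], rows[0]
    return rows

rows, names = rfc4180_split('a,b\n"1","2"\n', with_header=True)
print(names)  # ['a', 'b']
print(rows)   # [['1', '2']]
```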
participants (5)

- Alan Braslau
- Hans Hagen
- Marcin Borkowski
- Mojca Miklavec
- Philipp Gesang