Patrick Gundlach wrote:
It is perhaps a lot of work, but extending contextgarden.net in such a way that users could provide test cases, which would be typeset with different ConTeXt versions and the resulting PNGs compared afterwards ... that could make it easier to discover any broken functionality.
It comes down to:
* different ConTeXt versions needed
Once you are done with the fonts ... Different ConTeXt versions also seem to be useful for all kinds of things:
- browsing (& comparing) old source files
- running live.contextgarden.net on older versions (in case the behaviour changed in the new version)
- (maybe downloading old ConTeXt versions for whatever reason?)
- modules.pdf
- test suite
* different test documents needed
This can be done gradually. Of course, it is impossible to have a complete test suite, but at least some features can be tested; otherwise broken functionality would be noticed much later (if ever). If you let the users add material, the suite will gradually grow. When reporting a bug it would then be much easier to say: "take a look: my document is OK with version this-and-that, but since version such-and-such something strange happens." (Perhaps test documents should also carry at least some very basic set of labels: very important to test, interesting test case, just temporary or one-time testing. Or at least some switch to leave the document there but remove it from the "test suite" list when the next ConTeXt version is out. 100 versions and 1000 documents ... can slow down your computer a bit.)
1) the user can select any combination of the above
2) the result (a single page as PNG, multiple pages as PDF) can be viewed or downloaded
Converting a multiple-page PDF to PNGs should also be possible. Take http://archive.contextgarden.net/thread/20050701.172657.13cd3fb5.html for example (the difference shows up on the third page). But the number of pages, or at least the number of documents with many pages, has to be limited somehow.
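For instance (a rough sketch only, assuming a pdftoppm that can write PNG directly is installed on the server; Ghostscript would do just as well, and max_pages caps the cost as suggested above):

```python
import subprocess
from pathlib import Path

def pdf_to_pngs(pdf, outdir, dpi=100, max_pages=20):
    """Render the first max_pages pages of `pdf` as PNGs in `outdir`."""
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["pdftoppm", "-png", "-r", str(dpi),
         "-f", "1", "-l", str(max_pages),
         str(pdf), str(out / "page")],
        check=True,
    )
    # pdftoppm zero-pads the page numbers, so sorting keeps page order
    return sorted(out.glob("page-*.png"))
```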
3) feedback
Perhaps some labeling in case the results differ?
- Label A: to inspect what's going wrong
- Label B: it's OK, it's only a new feature, not supported before
- Label C: OK, the bug was removed (some pages in between differ)
- Label D: ...
Only label A and perhaps some others would be interesting then ...
So we need test documents, and after that ConTeXt can mix in the pages from the known-good document and the selected version.
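If "mixing in" means interleaving pages for side-by-side inspection, a small script could even generate such a comparison document (a rough sketch; the file names and page count are made up, and it leans on ConTeXt's \externalfigure, which can import single PDF pages):

```python
from pathlib import Path

# Made-up inputs: the known-good PDF and the one from the selected version.
GOOD, NEW, PAGES = "known-good.pdf", "selected-version.pdf", 3

# Alternate the pages of both files so each pair sits next to each other.
body = "\n".join(
    f"\\externalfigure[{pdf}][page={page}]\\page"
    for page in range(1, PAGES + 1)
    for pdf in (GOOD, NEW)
)
Path("compare.tex").write_text(f"\\starttext\n{body}\n\\stoptext\n")
```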
So you think that users actually download these comparisons?
The most important thing would be some graphical representation (different colors if a document doesn't compile or if its output changes), but allowing the user to download a selected document compiled with a selected version should be possible as well.
Don't call me magic. I am currently trying to understand the tftopl, pltotf, vftovp and vptovf programs in detail, and I am so miserable at it; all kinds of optimizations make the code unreadable.
:)
Let me extend the suggestion, for we have been missing a ConTeXt test suite for a long time now:
- users send test cases
- test cases get typeset with the current ConTeXt and converted to PNG (like the inline samples)
- users "vote" on whether it looks right; if not, they add a comment
- the test case gets saved, including the PNG, the ConTeXt version and the votes/comments
- at the next update, all test cases get typeset again, and users can "vote" on whether it still looks OK, or at least the same (perhaps it would even be possible to check automatically whether the bitmaps are exactly identical; see the sketch below)
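The automatic check at least is cheap; a minimal sketch, assuming both versions are rendered with identical settings so that an unchanged page comes out byte-for-byte identical:

```python
import filecmp

def differing_pages(old_pages, new_pages):
    """Return the 1-based numbers of pages whose PNGs differ.

    A page-count mismatch counts the surplus pages as differing too.
    """
    diff = [
        n for n, (a, b) in enumerate(zip(old_pages, new_pages), start=1)
        if not filecmp.cmp(a, b, shallow=False)
    ]
    lo, hi = sorted((len(old_pages), len(new_pages)))
    return diff + list(range(lo + 1, hi + 1))
```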
Again, I doubt that users will actually do this kind of thing. I could provide some interface for documents connected to version numbers, so you could download a set of .tex files (or one big .tex file) and the related PDF file, but we need to collect good examples. And I think that testing is very hard: there are so many different things in ConTeXt that could be tested that the PDF would run to a few hundred pages.
No, that wouldn't make much sense. I was thinking about PDF-to-PNG conversion and bitwise comparison of the files, which could be done automatically. Each time a new version was uploaded, the documents would be compiled, any compilation errors caught, and any image differences reported. Only if one of those two cases occurred would the results be inspected manually.
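Roughly like this, reusing the pdf_to_pngs and differing_pages sketches from above (texexec stands in for whichever binary runs the selected ConTeXt version; that it reports errors via its exit code is an assumption):

```python
import subprocess

def check_document(tex_file, context_bin, good_pages):
    """Compile, render and compare one test document.

    Returns "error" if the run fails, "changed" plus the differing
    page numbers if the output moved, and "ok" otherwise.
    """
    run = subprocess.run([context_bin, tex_file], capture_output=True)
    if run.returncode != 0:
        return "error", []
    pdf = tex_file.rsplit(".", 1)[0] + ".pdf"
    diff = differing_pages(good_pages, pdf_to_pngs(pdf, "new"))
    return ("changed", diff) if diff else ("ok", [])
```

Sorry Patrick :), Mojca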