Hi Luigi,

On Thu, Mar 4, 2010 at 6:42 PM, luigi scarso <luigi.scarso@gmail.com> wrote:
On Thu, Mar 4, 2010 at 3:25 PM, James Fisher <jameshfisher@gmail.com> wrote:
> lol; I thought this might come up.  I have a couple of replies to that:
>
> (1) First and most important: I'm not suggesting that we use TeX to document
> things at all.  I'm suggesting that ConTeXt documentation should be
> accessible to newcomers in the same format as 99% of all other projects:
> good old HTML.
Today HTML is still crude for a typographer, but things may change with WOFF.
You still can't show the potential of ConTeXt with HTML, because its main
output is PDF.


I completely understand that, typographically, HTML is crude -- if it weren't, I probably wouldn't be here at all; I'd write in HTML and print to PDF from a browser. But I think that misunderstands what 'the potential of ConTeXt' is. ConTeXt was not created to produce documentation for ConTeXt. People are not foolish enough to think, "if project X doesn't write its documentation in X, there can't be much else it can do". You don't write Teach Yourself French in the French language.

(Also: WOFF will only help inasmuch as it lets us put quality typefaces in front of people; it brings no improvements in, e.g., line-breaking algorithms, microtypography, and what have you. But that's beside the point.)
 
>On the web (which you are), HTML is king.
In a printing house (which is where I am), PDF is king.


Ok, I said I'd put the HTML/PDF thing to rest, but I'll try and get my thoughts across again:
I found ConTeXt via the web.  Almost every other software project I've ever found, I found via the web.  I did not find ConTeXt via a printing house (perhaps others do; I'm getting the impression I'm a bit of an outlier in this community).  HTML is typographically crude, but, and this is important, *informationally* HTML (and the web and friends) is far from crude.  The web is not a vast flat collection of PDFs; HTML is its unchallenged superglue, and the web is where I feel the community should properly lie.  Now, it's quite possible that other people disagree with me here, and that I'm factually wrong -- for example, if the ConTeXt community predominantly lives in the real world, with gatherings, seminars, handed-out printed leaflets and manuals, and overhead slide presentations -- in *that* case, then yes, PDF is king.
 
>TeX and PDFs are
> no replacement for the interconnected power of the web.  When I want a quick
> piece of information in <10 seconds, I do not want to consult a
> hand-collected folder of PDFs, or google for it and wait the age for a PDF
> to load.
I grep the code.
It works even offline and in less than 1 second.


Yes. But the web also works in less than a second (albeit only while online -- but who is ever offline?), and the web is far more than a 'World Wide Grep'.  It's an unimaginably vast, cross-referenced, semantically aware net with search engines of huge processing power.  Executing `grep interpretation of grave character *' unfortunately does not give quite the same result.
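
(To make the comparison concrete: the offline workflow amounts to something
like the sketch below -- a hypothetical Python equivalent of grep, with an
assumed source path -- and what it returns is raw source lines, not an
explanation of the behaviour.)

    #!/usr/bin/env python
    # Hypothetical sketch of the offline "grep the code" workflow: a plain
    # substring search over a local checkout of the ConTeXt sources. Fast
    # and offline, but the output is matching lines, not semantics.

    import os

    SOURCE_DIR = "tex/context/base"  # assumed location of a local checkout
    NEEDLE = "grave"                 # e.g. hunting for grave-accent handling

    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if NEEDLE in line:
                        print("%s:%d: %s" % (path, lineno, line.rstrip()))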
 
> That kind of feeling, I guess, is the reason that the
> contextgarden wiki exists.  But Mediawiki is really not the most
> appropriate way to document a project.  Wikis are messy and unstructured.
> They don't lend themselves well to the hierarchical kind of structure
> appropriate for representing a codebase.  So I'm suggesting that ConTeXt be
> documented using a typical established documentation system.
I disagree.
The minimals should be self-contained;
a documentation system not done in ConTeXt would introduce a useless dependency.

Anyway, even if there is already
http://foundry.supelec.fr/gf/project/modules/scmsvn/
(which is only useful as a testbed, not for documentation),

or if one day we have something like cseq
(see http://www.tug.org/utilities/plain/cseq.html,
possibly generated automatically from the code base),


This looks lovely.
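
Just to sketch how the 'automatic fashion' part might work (hypothetical:
the source path is an assumption, and the real ConTeXt sources use many more
definition forms than plain \def):

    #!/usr/bin/env python
    # Hypothetical sketch: scrape macro definitions out of a local checkout
    # of the ConTeXt sources and print an alphabetical, cseq-style index of
    # where each control sequence is defined.

    import os
    import re

    SOURCE_DIR = "tex/context/base"  # assumed location of a local checkout

    # Match e.g. \def\macroname or \unexpanded\def\macroname; the real
    # sources also use \let, \chardef, setup interfaces, and so on.
    DEF_PATTERN = re.compile(r"\\(?:unexpanded\s*)?def\s*\\([a-zA-Z@!?]+)")

    index = {}  # macro name -> list of (file, line) definition sites

    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            if not name.endswith((".tex", ".mkii", ".mkiv")):
                continue
            path = os.path.join(root, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    for macro in DEF_PATTERN.findall(line):
                        index.setdefault(macro, []).append((name, lineno))

    for macro in sorted(index):
        sites = ", ".join("%s:%d" % site for site in index[macro])
        print("\\%s  --  %s" % (macro, sites))

From an index like that, hyperlinking each name to its source line (or to a
wiki stub) is a small step.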
 

or a wiki book
(see http://en.wikibooks.org/wiki/LaTeX,
apropos of "Mediawiki is really not the most
appropriate way to document a project"),

it will not be enough --- a good starting point, of course.


In the end, one needs to understand the language and its semantics, and
study the code.
With the TeXbook, a couple of manuals from Pragma (cont-en, metafun), and
the code, you are OK
(well, the ~1000 pages of the PDF spec don't hurt either, nor do some books
about fonts ...).

Mmm, yes, you've made quite a lot of demands there on the curious programmer who has stumbled across ConTeXt ...
 
There are also articles, and they are OK too.
TeX is a macro language. There are roughly 1000 macros in TeX, and maybe
500 in ConTeXt.
Even if we are able to document them all in some manner, understanding
them and their relations
is a matter of studying the code.


I don't think so.  The "just study the code" approach shows an awfully austere, reductionist philosophy.  Humans understand things from the top down.  It's the computers that work from the bottom up.
 

>> About the model of development: one developer is not so strange after all.
>
> I'm not sure what your point is here.  That user contribution leads to
> 'featuritis'?  I totally understand that being 'frozen' is not a bad thing;
> it effectively means 'having reached a state of perfection for the defined
> task' -- I don't think this has a connection with having one developer.
> More developers == faster rate of approach to the limit of perfection.

No, not necessarily, and not in this situation.
For TeX, frozen means no new features, only bug fixes;
it means that the language is maintained and that backward compatibility is
very important
(about 80% of scientific articles are in TeX, so backward
compatibility really matters).
It doesn't mean that the language is perfect.
To me, frozen simply says that it's time to explore the semantics of
the language rather than
add new features.


>
>>
>> This model doesn't imply that you cannot contribute to the code base,
>> only that all contributions need to be validated (and possibly
>> rejected) and integrated by the developer.
>> You can also contribute third-party modules, but they are not in the
>> base code, and in case of conflicts the base code wins.
>>
>
> Sure thing -- revision control doesn't hinder that at all.  If Hans doesn't
> want to merge someone else's changes to his (authoritative) copy of the
> repo, then he doesn't have to.  DVCS != chaos.
One developer assures that there is exactly one version and no forks
(friendly or not).
This is also OK because there is no need for forks (after all, no one is
thinking of forking LaTeX2e):

I think you're thinking of 'forking' as something dangerous (yeah, the word sounds painful), as something that will fragment the community, as something that destroys the concept of 'authority'.  It's really not.  Where you get forking, you get merging at roughly the same rate.
 
> If Hans doesn't
> want to merge someone else's changes to his (authoritative) copy of the
> repo, then
the changes are rejected from the code base.

I'm not saying that a DVCS is useless for documentation or manuals.
But without contributors a DVCS is practically useless,
and at the moment the only contributors to the manuals are Taco for LuaTeX
and Hans for ConTeXt MkIV.


Why are they the only contributors?
 



--
luigi