Re: [NTG-context] Occasional words sticking out from flush-right
On 03.03.2010 22:41, ntg-context-request@ntg.nl wrote:
It stymies me how people on this mailing list know this stuff -- even a Google search for "setbreakpoints", assuming I knew the command in advance, returns nada.
This is all sacred knowledge, for devoted seekers :o) (Arthur, what about your church of TeX?)

Vyatcheslav
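For the record, the command in question can be used roughly like this (a minimal sketch; `[compound]` is the commonly cited variant -- check the ConTeXt sources or texshow for the exact list of options):

```tex
% Allow extra line-break points, e.g. after explicit hyphens in
% compound words, so long compounds stop protruding past the margin.
\setbreakpoints[compound]

\starttext
Long compound constructions such as input--output analysis can now
break after the hyphen instead of sticking out from the flush-right edge.
\stoptext
```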
Well, it's reassuring that people can at least admit this is a closed community. (But aren't churches meant to evangelize?)

"For using ConTeXt, no TeX programming skills and no technical background are needed." (http://wiki.contextgarden.net/What_is_ConTeXt)

"So why don't you grep in base/* ?" (Luigi; I appreciate the advice, but a bit of a contradiction, methinks)

Also, re "there is only one ConTeXt developer --- Hans Hagen": I'd suggest a few reasons for this are:

(1) in order to develop on a project, you first need the high-level appreciation of the system that comes from documentation;

(2) ConTeXt does not have any revision control system that I can see (the only source code browser seems to be http://source.contextgarden.net/, which looks entirely custom); all I can find is the SVN of the in-progress manual;

(3) the low-level macro documentation at http://texshow.contextgarden.net/ is a start, but: (i) instead of a custom system with basic editing, a modern documentation system (I'm thinking of http://sphinx.pocoo.org/, used for the *fantastic* documentation of the Python library) would be more productive, and (ii) this documentation is completely unstructured, being just an alphabetical list. (This from the community that came up with "literate programming"?)

Also, having just one developer is not at all something to celebrate, and no, this model of development is not OK; I wouldn't say it's a model for development at all. Other projects manage just fine without naming conflicts. Admittedly this is with the amazingly obvious concept of namespacing, which TeX doesn't have -- though I've just been reading an article on namespacing in http://www.extex.org/: http://www.tug.org/TUGboat/Articles/tb27-0/neugebauer.pdf

James

On Wed, Mar 3, 2010 at 9:53 PM, Arthur Reutenauer <arthur.reutenauer@normalesup.org> wrote:
(Arthur, what about your church of TeX?)
I deny everything.
Arthur
___________________________________________________________________________________ If your question is of interest to others as well, please add an entry to the Wiki!
maillist : ntg-context@ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context webpage : http://www.pragma-ade.nl / http://tex.aanhet.net archive : http://foundry.supelec.fr/projects/contextrev/ wiki : http://contextgarden.net
___________________________________________________________________________________
On Wed, 3 Mar 2010, James Fisher wrote:
Also, re "there is only one ConTeXt developer --- Hans Hagen": I'd suggest a few reasons for this are: (1) in order to develop on a project, you first need the high-level appreciation of the system that comes from documentation
MkII is fairly well documented. See http://wiki.contextgarden.net/Official_ConTeXt_Documentation
MkIV is only documented at http://www.pragma-ade.com/general/manuals/mk.pdf. Part of the reason is that it is still changing. The documentation is not perfect, but it is huge (more than 1000 pages last time I checked). Saying that ConTeXt is undocumented is not fair, IMO.
(2) ConTeXt does not have any revision control system that I can see (the only source code browser seems to be http://source.contextgarden.net/, which looks entirely custom); all I can find is the SVN of the in-progress manual
git clone http://dl.contextgarden.net/distribution/git/ Hans does not use a public version control system. The above repository is a daily snapshot of ConTeXt files.
(3) The low-level macro documentation at http://texshow.contextgarden.net/ is a start, but: (i) instead of a custom system with basic editing, a modern documentation system (I'm thinking of http://sphinx.pocoo.org/, used for the *fantastic* documentation of the Python library) would be more productive, and (ii) this documentation is completely unstructured, being just an alphabetical list.
(This from the community that came up with "literate programming"?)
The sources are fairly well documented. Just read the source files, or see http://foundry.supelec.fr/gf/project/modules/ for PDF output. The question of documentation has come up many times in the past. Every time we conclude that we need a volunteer to maintain the documentation, but so far no one has stepped forward (hint, hint).

Aditya
Right, to show I'm not just empty words, I've just spent ~90 minutes
preparing the beginnings of some decent documentation. Presenting
http://github.com/eegg/ConTeXt-doc : basically, I've:
(1) wget'ed all the English HTML from the texshow documentation
(2) converted it all to reStructuredText using html2rest.py (
http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py)
(3) plugged the result into a fresh installation of the Sphinx documentation
system
(4) Pushed the whole thing to a new github repo (including generated HTML so
you can take a look without bothering to install Sphinx)
To note:
- Sphinx really is state-of-the-art. I suggest you spend a few minutes
browsing http://docs.python.org/ to see what I think is 'good
documentation.' It runs on reStructuredText, a powerful, purely semantic
and readable (almost invisible) markup.
- Revision control, people! I strongly encourage everyone to fork and push
this repository.
- There's a hella lot of documentation to do here. Most of the pages in
texshow are just placeholders. There are also massive capabilities in
something like Sphinx to organize the code documentation with sensible
commentaries.
- In my humble opinion, TeXies need to get out of the habit of
'self-documenting' TeX using TeX itself. TeX is not some replacement for
all markup, it's for producing beautiful books (OK, and some presentations);
in any case, this habit smacks of introversion.
To address previous points in this thread:
- Maybe I exaggerated a tad on how little documentation there is.
- Why on earth is there a git repository that is just slave storage? That
uses about 1% of its capabilities; it seems a terrible waste.
So, thoughts?
James
On Thu, Mar 4, 2010 at 12:08 AM, Aditya Mahajan
On Wed, 3 Mar 2010, James Fisher wrote:
Also, re "there is only one ConTeXt developer --- Hans Hagen":
I'd suggest a few reasons for this are: (1) in order to develop on a project, you first need the high-level appreciation of the system that comes from documentation
MkII is fairly well documented. See http://wiki.contextgarden.net/Official_ConTeXt_Documentation
MkIV is only documented at http://www.pragma-ade.com/general/manuals/mk.pdf. Part of the reason is that it is still changing.
The documentation is not perfect, but it is huge (more than 1000 pages last time I checked). Saying that ConTeXt is undocumented is not fair, IMO.
(2) ConTeXt does not have any revision control system that I can see (the
only source code browser seems to be http://source.contextgarden.net/, which looks entirely custom); all I can find is the SVN of the in-progress manual
git clone http://dl.contextgarden.net/distribution/git/
Hans does not use a public version control system. The above repository is a daily snapshot of ConTeXt files.
(3) The low-level macro documentation at
http://texshow.contextgarden.net/ is a start, but: (i) instead of a custom system with basic editing, a modern documentation system (I'm thinking of http://sphinx.pocoo.org/, used for the *fantastic* documentation of the Python library) would be more productive, and (ii) this documentation is completely unstructured, being just an alphabetical list.
(This from the community that came up with "literate programming"?)
The sources are fairly well documented. Just read the source files, or see http://foundry.supelec.fr/gf/project/modules/ for PDF output.
The question of documentation has come up many times in the past. Every time we conclude that we need a volunteer to maintain the documentation, but so far no one has stepped forward (hint, hint).
Aditya
On Thu, 4 Mar 2010, James Fisher wrote:
Right, to show I'm not just empty words, I've just spent ~90 minutes preparing the beginnings of some decent documentation. Presenting http://github.com/eegg/ConTeXt-doc : basically, I've:
Interesting.
(2) converted it all to reStructuredText using html2rest.py ( http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py)
The values in texshow are generated from XML files: http://source.contextgarden.net/tex/context/interface/cont-en.xml
- There's a hella lot of documentation to do here. Most of the pages in texshow are just placeholders. There are also massive capabilities in something like Sphinx to organize the code documentation with sensible commentaries.
Someone will still need to *write* the details. That has been the biggest bane of ConTeXt documentation. Almost all documentation is written by Hans and Taco, and currently they want to focus on development and advanced documentation, not on converting all the documentation to organized HTML.
- In my humble opinion, TeXies need to get out of the habit of 'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations); in any case, this habit smacks of introversion.
In this case it is not a question of markup, but of the output format, and whether the source and the documentation are in sync or not. Basically, ConTeXt sources are documented as

%D documentation ...
\tex code
%D documentation
\tex code

In principle, we can replace the markup in the documentation with XML or an ASCII markup. It is easy enough to extract the %D lines and post-process them with any tool that you like. The biggest advantage of using PDF output is that we can show the output of code snippets. For example,

\startbuffer some tex code \stopbuffer
\typebuffer
gives
\getbuffer

thereby ensuring that the documentation is showing the correct behavior. To do this in HTML requires an additional ConTeXt run, converting the output to PNG, and displaying the PNG (this is how the wiki treats <context> ... </context> tags).
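Extracting those %D lines mechanically is indeed straightforward; here is a minimal, illustrative sketch (the real ConTeXt sources use a richer convention than a bare %D prefix, so treat this as a starting point only):

```python
import re

def extract_docs(source):
    """Pull the %D documentation lines out of a ConTeXt-style source,
    returning the documentation text with the %D prefix stripped.
    (Illustrative only; real extraction handles more directives.)"""
    doc_lines = []
    for line in source.splitlines():
        m = re.match(r"^%D ?(.*)$", line)
        if m:
            doc_lines.append(m.group(1))
    return "\n".join(doc_lines)

sample = """%D Some documentation
\\def\\foo{bar}
%D More documentation
\\def\\baz{qux}"""
print(extract_docs(sample))
```

The extracted text could then be fed to any post-processor (reStructuredText, XML, whatever) as Aditya suggests.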
- Why on earth is there a git repository that is just slave storage? That uses about 1% of its capabilities; it seems a terrible waste.
Because ConTeXt has only one main developer :-)

Aditya
Hi Aditya,
On Thu, Mar 4, 2010 at 4:06 AM, Aditya Mahajan
On Thu, 4 Mar 2010, James Fisher wrote:
Right, to show I'm not just empty words, I've just spent ~90 minutes
preparing the beginnings of some decent documentation. Presenting http://github.com/eegg/ConTeXt-doc : basically, I've:
Interesting.
(2) converted it all to reStructuredText using html2rest.py (
http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py)
The values in texshow are generated from XML files: http://source.contextgarden.net/tex/context/interface/cont-en.xml
Well now, that's interesting. May I ask where that XML itself comes from? Is it hand-maintained by Hans/Taco/Patrick?
- There's a hella lot of documentation to do here. Most of the pages in
texshow are just placeholders. There are also massive capabilities in something like Sphinx to organize the code documentation with sensible commentaries.
Someone will still need to *write* the details. That has been the biggest bane of ConTeXt documentation. Almost all documentation is written by Hans and Taco, and currently they want to focus on development and advanced documentation, not on converting all the documentation to organized HTML.
Of course. So before people offer to write documentation, the barriers to it being written have to be lowered. No sane person wants to (read: *I* don't want to) hand-maintain one massive XML file.
- In my humble opinion, TeXies need to get out of the habit of
'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations); in any case, this habit smacks of introversion.
In this case it is not a question of markup, but of the output format, and whether the source and the documentation are in sync or not. Basically, context sources are documented as
%D documentation ...
\tex code
%D documentation
\tex code
In principle, we can replace the markup in the documentation with XML or an ASCII markup. It is easy enough to extract the %D lines and post-process them with any tool that you like. The biggest advantage of using PDF output is that we can show the output of code snippets. For example,
\startbuffer some tex code \stopbuffer
\typebuffer
gives
\getbuffer
thereby ensuring that the documentation is showing the correct behavior. To do this in HTML requires an additional ConTeXt run, converting the output to PNG, and displaying the PNG (this is how the wiki treats <context> ... </context> tags).
That is also something to think about. But I don't think it's really a serious problem -- the Mediawiki <context> works well enough. In terms of user-friendliness I would say it works better than a massive PDF -- I would rather consult an image on the web. It wouldn't be too hard to alter Sphinx (as an example; I suggest Sphinx so we can talk concretely) so that all TeX-marked-up code is shown side-by-side as [ syntax-highlighted code | ConTeXt output as PNG ]. (This would be an improvement on the wiki implementation where the TeX code is duplicated in the source.)
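The side-by-side rendering itself is trivial to generate; a hypothetical helper (the function name, CSS class, and two-cell-table layout are all my invention -- a real Sphinx directive would wrap something like this and also run ConTeXt to produce the PNG):

```python
import html

def side_by_side(tex_code, png_path):
    """Return an HTML fragment showing ConTeXt source next to its
    rendered output image, as a two-cell table.
    (Hypothetical sketch, not an existing Sphinx API.)"""
    return (
        '<table class="context-example"><tr>'
        '<td><pre>%s</pre></td>' % html.escape(tex_code)
        + '<td><img src="%s" alt="ConTeXt output"/></td>' % html.escape(png_path)
        + '</tr></table>'
    )

print(side_by_side("\\framed{hello}", "out/hello.png"))
```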
- Why on earth is there a git repository that is just slave storage? That
uses about 1% of its capabilities; it seems a terrible waste.
Because ConTeXt has only 1 main developer :-)
Again I smell circular reasoning :) ... I suppose at this point I want to ask Hans personally: is cutting everyone else out from the workflow a design decision?
Aditya
P.S.: I've been updating the documentation of 'Enumerations' in the git repo -- I've chosen to develop a little patch as an example of what documentation could be across the board. Best, James
On Thu, 4 Mar 2010, James Fisher wrote:
On Thu, Mar 4, 2010 at 4:06 AM, Aditya Mahajan
wrote: On Thu, 4 Mar 2010, James Fisher wrote:
(2) converted it all to reStructuredText using html2rest.py (
http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py)
The values in texshow are generated from XML files: http://source.contextgarden.net/tex/context/interface/cont-en.xml
Well now, that's interesting. May I ask where that XML itself comes from? Is it hand-maintained by Hans/Taco/Patrick?
It is hand maintained. Ideally, whenever someone suggests an enhancement, they should also send an update for the interface files.
- In my humble opinion, TeXies need to get out of the habit of
'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations); in any case, this habit smacks of introversion.
In this case it is not a question of markup, but of the output format, and whether the source and the documentation are in sync or not. Basically, context sources are documented as
%D documentation ...
\tex code
%D documentation
\tex code
In principle, we can replace the markup in the documentation with XML or an ASCII markup. It is easy enough to extract the %D lines and post-process them with any tool that you like. The biggest advantage of using PDF output is that we can show the output of code snippets. For example,
\startbuffer some tex code \stopbuffer
\typebuffer
gives
\getbuffer
thereby ensuring that the documentation is showing the correct behavior. To do this in HTML requires an additional ConTeXt run, converting the output to PNG, and displaying the PNG (this is how the wiki treats <context> ... </context> tags).
That is also something to think about. But I don't think it's really a serious problem -- the Mediawiki <context> works well enough. In terms of user-friendliness I would say it works better than in a massive PDF -- I would rather consult an image on the web.
I personally prefer a massive PDF to a massive HTML with lots of images. With PDF you can also *search* the output. A perfect solution would be to generate both outputs from a single source, but that means a custom-made solution.
It wouldn't be too hard to alter Sphinx (as an example; I suggest Sphinx so we can talk concretely) so that all TeX-marked-up code is shown side-by-side as [ syntax-highlighted code | ConTeXt output as PNG ]. (This would be an improvement on the wiki implementation where the TeX code is duplicated in the source.)
This is what the wiki does. <context source="yes"> shows both the source and the output side by side. This was a later addition, so there is still code that duplicates the source in <texcode> and <context>.

Aditya
On Thu, Mar 4, 2010 at 6:11 PM, Aditya Mahajan
I personally prefer a massive PDF to a massive HTML with lots of images. With pdf you can also *search* the output. A perfect solution will be to generate both outputs from a single source, but that means a custom made solution.

Doable with luatex.
-- luigi
On Thu, 4 Mar 2010, luigi scarso wrote:
On Thu, Mar 4, 2010 at 6:11 PM, Aditya Mahajan
wrote:

I personally prefer a massive PDF to a massive HTML with lots of images. With pdf you can also *search* the output. A perfect solution will be to generate both outputs from a single source, but that means a custom made solution.

Doable with luatex.
That defeats the whole point of what James is suggesting. Use an existing, feature-rich system for source documentation rather than rolling your own.

Aditya
On Thu, Mar 4, 2010 at 6:30 PM, Aditya Mahajan
On Thu, 4 Mar 2010, luigi scarso wrote:
On Thu, Mar 4, 2010 at 6:11 PM, Aditya Mahajan
wrote:

I personally prefer a massive PDF to a massive HTML with lots of images. With pdf you can also *search* the output. A perfect solution will be to generate both outputs from a single source, but that means a custom made solution.
Doable with luatex.
That defeats the whole point of what James is suggesting. Use an existing, feature-rich system for source documentation rather than rolling your own.

Yes.
-- luigi
On Thu, Mar 4, 2010 at 5:11 PM, Aditya Mahajan
On Thu, 4 Mar 2010, James Fisher wrote:
On Thu, Mar 4, 2010 at 4:06 AM, Aditya Mahajan
wrote: On Thu, 4 Mar 2010, James Fisher wrote:
(2) converted it all to reStructuredText using html2rest.py (
http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py)
The values in texshow are generated from XML files: http://source.contextgarden.net/tex/context/interface/cont-en.xml
Well now, that's interesting. May I ask where that XML itself comes
from? Is it hand-maintained by Hans/Taco/Patrick?
It is hand maintained. Ideally, whenever someone suggests an enhancement, they should also send an update for the interface files.
Ouch.
- In my humble opinion, TeXies need to get out of the habit of
'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations); in any case, this habit smacks of introversion.
In this case it is not a question of markup, but of the output format, and whether the source and the documentation are in sync or not. Basically, context sources are documented as
%D documentation ...
\tex code
%D documentation
\tex code
In principle, we can replace the markup in the documentation with XML or an ASCII markup. It is easy enough to extract the %D lines and post-process them with any tool that you like. The biggest advantage of using PDF output is that we can show the output of code snippets. For example,
\startbuffer some tex code \stopbuffer
\typebuffer
gives
\getbuffer
thereby ensuring that the documentation is showing the correct behavior. To do this in HTML requires an additional ConTeXt run, converting the output to PNG, and displaying the PNG (this is how the wiki treats <context> ... </context> tags).
That is also something to think about. But I don't think it's really a
serious problem -- the Mediawiki <context> works well enough. In terms of user-friendliness I would say it works better than in a massive PDF -- I would rather consult an image on the web.
I personally prefer a massive PDF to a massive HTML with lots of images. With PDF you can also *search* the output. A perfect solution would be to generate both outputs from a single source, but that means a custom-made solution.
I'll put the PDF vs. HTML argument to rest :) ... suffice it to say that I thoroughly agree a semantic single-source solution with multiple outputs is highly desirable. I've just two pieces of guidance on the roads not to go down: (1) XML isn't a great solution because, while it's purely semantic, extensible, easily parseable, and all the rest of it, it is *horrible* to look at and maintain; (2) TeX isn't a great solution because of its curious property that it is only really parseable by TeX itself ... none of the "tex-to-<whatever>" attempts that I've seen are a viable option, IMO.
It wouldn't be too hard to alter Sphinx (as a for example; I suggest
Sphinx so we can talk concretely) so that all TeX-markupped code is shown side-by-side as [ syntax-highlighted code | ConTeXt output as PNG ]. (This would be an improvement on the wiki implementation where the TeX code is duplicated in the source.)
This is what the wiki does. <context source="yes"> shows both the source and the output side by side. This was a later addition, so there is still code that duplicates the source in <texcode> and <context>
Duly noted. I guess I've just happened to only see the latter.
Aditya
On Thu, Mar 4, 2010 at 3:35 AM, James Fisher
- In my humble opinion, TeXies need to get out of the habit of 'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations);

I think that "self-documenting" TeX is 20 years old now --- it started with LaTeX 2.09, I believe.
So, thoughts?

Yes. From http://sphinx.pocoo.org/: "Sphinx is a tool that makes it easy to create intelligent and beautiful documentation" -- but I believe that ConTeXt is better.
" * Output formats: HTML (including Windows HTML Help) and LaTeX, for printable PDF versions" -- are you suggesting to use LaTeX to document the ConTeXt source?

About the model of development: one developer is not so strange after all. In other situations maybe this is not adequate; in this situation it is actually the best choice (where, from my experience, "actually" goes from 10 years ago until now). For example, MkII is frozen while MkIV is at 50%, if we consider that luatex 0.50 is at 50% and luatex 1.0 will be 100%. Btw, MkIV is really usable, not in some fuzzy alpha state. (Frozen is not a bad word: TeX is frozen from ~1990; pdftex is "cold", i.e. changes a little; luatex is "hot".)

This model doesn't imply that you cannot contribute to the code base, only that all contributions need to be validated (and possibly rejected) and integrated by the developer. You can also contribute with third-party modules, but they are not in the base code, and in case of conflicts the base code wins.

There is no need for a public DVCS: for MkIV there is always one beta version, the last one. Errors will be fixed in the next beta. This implies that you must be prepared to patch your macros/stylesheets to match the last version. Patrick thinks that a public git is a good idea, and me too, but one can always manage his personal DVCS --- which is a good idea to understand the code evolution on a particular subject (I believe Arthur has a historical archive).

For comparison, the luatex project is developed in the "traditional" manner: SVN, bug tracker, manual (in ConTeXt MkII); the code base is in C with target CWEB. You can think of luatex as a low-level layer whose development is driven by MkIV, a very high-level layer, whose development is influenced by luatex itself (a sort of negative feedback; see http://en.wikipedia.org/wiki/Control_theory).

As I said, the language and its semantics are particular, almost unique. Nothing strange that there is an ad hoc model of development.

-- luigi
On Thu, Mar 4, 2010 at 7:10 AM, luigi scarso
On Thu, Mar 4, 2010 at 3:35 AM, James Fisher
wrote:

- In my humble opinion, TeXies need to get out of the habit of 'self-documenting' TeX using TeX itself. TeX is not some replacement for all markup, it's for producing beautiful books (OK, and some presentations);

I think that "self-documenting" TeX is 20 years old now --- it started with LaTeX 2.09, I believe.
So, thoughts?

Yes. From http://sphinx.pocoo.org/: "Sphinx is a tool that makes it easy to create intelligent and beautiful documentation" -- but I believe that ConTeXt is better.
" * Output formats: HTML (including Windows HTML Help) and LaTeX, for printable PDF versions" -- are you suggesting to use LaTeX to document the ConTeXt source?
lol; I thought this might come up. I have a couple of replies to that:

(1) First and most important: I'm not suggesting that we use TeX to document things at all. I'm suggesting that ConTeXt documentation should be accessible to newcomers in the same format as 99% of all other projects: good old HTML. On the web (which you are on), HTML is king. TeX and PDFs are no replacement for the interconnected power of the web. When I want a quick piece of information in <10 seconds, I do not want to consult a hand-collected folder of PDFs, or google for it and wait an age for a PDF to load. That kind of feeling, I guess, is the reason that the contextgarden wiki exists. But Mediawiki is not really the most appropriate way to document a project, either. Wikis are messy and unstructured. They don't lend themselves well to the hierarchical kind of structure appropriate for representing a codebase. So I'm suggesting that ConTeXt be documented using a typical established documentation system.

(2) The docutils codebase (which manages reStructuredText) is modularized extremely well. Output formats can be written with a minimum of effort. The docutils document tree looks a lot like XML, and as such making ConTeXt output possible is just a matter of doing the standard XML-to-TeX conversion. I have in fact, while using ConTeXt, been writing a crude docutils ConTeXt writer (though it has quite a way to go).
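The flavor of such a writer can be sketched with a toy converter for a tiny reStructuredText subset (this is NOT the docutils writer API -- a real writer walks the parsed document tree rather than using regexes; function name and the two supported constructs are my own choices for illustration):

```python
import re

def rst_to_context(text):
    """Convert a tiny, illustrative subset of reStructuredText to
    ConTeXt markup: a title underlined with '=' becomes \\section{...},
    and ``inline literals`` become \\type{...}.
    (Toy stand-in for a real docutils-to-ConTeXt writer.)"""
    lines = text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        # a non-empty line followed by an '=' underline is a section title
        if i + 1 < len(lines) and lines[i + 1].startswith("=") and lines[i].strip():
            out.append("\\section{%s}" % lines[i].strip())
            i += 2
            continue
        out.append(re.sub(r"``(.+?)``", r"\\type{\1}", lines[i]))
        i += 1
    return "\n".join(out)

print(rst_to_context("Enumerations\n============\nUse ``\\startenumeration`` here."))
```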
About the model of development: one developer is not so strange after all.

In other situations maybe this is not adequate; in this situation it is actually the best choice (where, from my experience, "actually" goes from 10 years ago until now).

For example, MkII is frozen while MkIV is at 50%, if we consider that luatex 0.50 is at 50% and luatex 1.0 will be 100%. Btw, MkIV is really usable, not in some fuzzy alpha state (frozen is not a bad word: TeX is frozen from ~1990; pdftex is "cold", i.e. changes a little; luatex is "hot").
I'm not sure what your point is here. That user contribution leads to 'featuritis'? I totally understand that being 'frozen' is not a bad thing; it effectively means 'having reached a state of perfection for the defined task' -- I don't think this has a connection with having one developer. More developers == faster rate of approach to the limit of perfection.
This model doesn't imply that you cannot contribute to the code base, only that all contributions need to be validated (and possibly rejected) and integrated by the developer. You can also contribute with third-party modules, but they are not in the base code, and in case of conflicts the base code wins.
Sure thing -- revision control doesn't hinder that at all. If Hans doesn't want to merge someone else's changes to his (authoritative) copy of the repo, then he doesn't have to. DVCS != chaos.
There is no need for a public DVCS: for MkIV there is always one beta version, the last one. Errors will be fixed in the next beta. This implies that you must be prepared to patch your macros/stylesheets to match the last version.
This sounds circular to me: there's always one beta version *because* there's no revision control.
Patrick thinks that a public git is a good idea, and me too, but one can always manage his personal DVCS --- which is a good idea to understand the code evolution on a particular subject (I believe Arthur has a historical archive).
Sure, I do it for my pithy projects. All that I've learned is that I could do without it even less if my projects were large, like ConTeXt.
For comparison, the luatex project is developed in the "traditional" manner: SVN, bug tracker, manual (in ConTeXt MkII); the code base is in C with target CWEB.
Mm, well, I kinda have the same opinions towards luatex documentation.
You can think of luatex as a low-level layer whose development is driven by MkIV, a very high-level layer, whose development is influenced by luatex itself (a sort of negative feedback; see http://en.wikipedia.org/wiki/Control_theory).

I understand that concern, but again I'm not sure what your point is -- LuaTeX and MkIV influence each other, so ..
As I said, the language and its semantics are particular, almost unique. Nothing strange that there is an ad hoc model of development.
I don't know; I think the ties between language/semantics and the development model are pretty thin. I suspect that in the TeX community the characteristic of both comes from Knuth's character (extremely conservative and controlling, 'the codebase will stand with its flaws for all eternity for better or worse' kinda thing) rather than being an in-principle connection.
-- luigi
On Thu, Mar 4, 2010 at 3:25 PM, James Fisher
lol; I thought this might come up. I have a couple of replies to that:
(1) First and most important: I'm not suggesting that we use TeX to document things at all. I'm suggesting that ConTeXt documentation should be accessible to newcomers in the same format as 99% of all other projects: good old HTML.

Today HTML is still crude for a typographer, but things can change with WOFF. You still can't show the potential of ConTeXt with HTML, because the main output is PDF.
On the web (which you are), HTML is king.

In a printing house (which I'm in), PDF is king.
TeX and PDFs are no replacement for the interconnected power of the web. When I want a quick piece of information in <10 seconds, I do not want to consult a hand-collected folder of PDFs, or google for it and wait an age for a PDF to load.

I grep the code. It works even offline and in less than 1 second.
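Luigi's grep workflow is worth spelling out for newcomers; a self-contained sketch (the directory layout mirrors the usual tex/context/base path of a ConTeXt tree, but the file name and contents here are fabricated so the example runs anywhere -- on a real system you would grep your actual ConTeXt installation or the git snapshot):

```shell
# Fabricate a tiny source tree so the example is self-contained.
mkdir -p demo/tex/context/base
printf '%%D Breakpoint support\n\\def\\setbreakpoints{...}\n' \
    > demo/tex/context/base/supp-break.mkiv

# Find where a command is defined: file, line number, and the line itself.
grep -rn 'setbreakpoints' demo/
```

Against a real tree you would point grep at something like tex/context/base/ in the distribution, or at a local clone of http://dl.contextgarden.net/distribution/git/.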
That kind of feeling, I guess, is the reason that the contextgarden wiki exists. But MediaWiki is really not the most appropriate way to document a project either. Wikis are messy and unstructured. They don't lend themselves well to the hierarchical kind of structure appropriate for representing a codebase. So I'm suggesting that ConTeXt be documented using a typical established documentation system.
I disagree. The minimals should be self-contained; a documentation system not done in ConTeXt can introduce a useless dependency.
Anyway, even if there is already http://foundry.supelec.fr/gf/project/modules/scmsvn/ (which is only useful as a testbed, not for documentation), or if we one day have something like cseq (see http://www.tug.org/utilities/plain/cseq.html, possibly generated automatically from the code base), or a wiki book (see http://en.wikibooks.org/wiki/LaTeX, apropos of "Mediawiki is really not the most appropriate way to document a project"), it will not be enough --- a good starting point, of course. In the end, one needs to understand the language and its semantics, and study the code. With the TeXbook, a couple of manuals from Pragma (cont-en, metafun) and the code you are OK (well, the ~1000 pages of PDF specs are not bad either, and also some book about fonts ...). The rest are articles, and they are OK too. TeX is a macro language. There are almost ~1000 macros, and maybe ~500 macros in ConTeXt. Even if we are able to "document" them in some manner, understanding them and their relations is a matter of studying the code.
About the model of development: one developer is not so strange, after all.
I'm not sure what your point is here. That user contribution leads to 'featuritis'? I totally understand that being 'frozen' is not a bad thing; it effectively means 'having reached a state of perfection for the defined task' -- I don't think this has a connection with having one developer. More developers == faster rate of approach to the limit of perfection.
No, not necessarily, and not in this situation. For TeX, frozen means no new features, only bugfixes; it means that the language is maintained and that backward compatibility is very important (about 80% of scientific articles are in TeX, so backward compatibility really matters). It doesn't mean that the language is perfect. To me, frozen simply says "it's time to explore the semantics of the language rather than add new features".
This model doesn't imply that you cannot contribute to the code base, only that all contributions need to be validated (and possibly rejected) and integrated by the developer. You can also contribute third-party modules, but they are not in the base code, and in case of conflicts the base code wins.
Sure thing -- revision control doesn't hinder that at all. If Hans doesn't want to merge someone else's changes to his (authoritative) copy of the repo, then he doesn't have to. DVCS != chaos.
If Hans doesn't want to merge someone else's changes to his (authoritative) copy of the repo, then the changes are rejected from the code base.
One developer assures that there is exactly one version and no forks (friendly or not). This is also OK because there is no need for forks (after all, no one is thinking of forking LaTeX2e). I'm not saying that a DVCS is useless for documentation or manuals. But without contributors a DVCS can be practically useless, and the only contributors for the manuals at the moment are Taco for luatex and Hans for ConTeXt mkiv. -- luigi
Hi Luigi,
On Thu, Mar 4, 2010 at 6:42 PM, luigi scarso
On Thu, Mar 4, 2010 at 3:25 PM, James Fisher wrote:
lol; I thought this might come up. I have a couple of replies to that:
(1) First and most important: I'm not suggesting that we use TeX to document things at all. I'm suggesting that ConTeXt documentation should be accessible to newcomers in the same format as 99% of all other projects: good old HTML.
Today HTML is still crude for a typographer, but things can change with WOFF. And you still can't show the potential of ConTeXt with HTML, because the main output is PDF.
I completely understand that typographically, HTML is crude -- if it weren't, I probably wouldn't be here at all; I'd write in HTML and print to PDF from a browser. But I think that's a misunderstanding of what 'the potential of ConTeXt' is. ConTeXt was not created to produce documentation for ConTeXt. People are not foolish enough to think, "if project X doesn't write its documentation in X, there can't be much else it can do". You don't write Teach Yourself French in the French language. (Also: WOFF will only help inasmuch as we can force quality typefaces on people (no improvements in e.g. line-breaking algorithms, microtypography, and what have you). But that's beside the point.)
On the web (where you are), HTML is king. In a printing house (where I am), PDF is king.
OK, I said I'd put the HTML/PDF thing to rest, but I'll try to get my thoughts across again: I found ConTeXt via the web. Almost every single other software project I've ever found, I've found via the web. I did not find ConTeXt via a printing house (perhaps others do; I'm getting the impression I'm a bit of an outlier in this community). HTML is typographically crude, but, and this is important, *informationally*, HTML (and the web and friends) is far from crude. The web is not a vast flat collection of PDFs; HTML is its unchallenged superglue, and the web is where I feel the community should properly lie. Now, it's quite possible that other people disagree with me here, and that I'm factually wrong -- for example, if the ConTeXt community predominantly lies in the 'real world', with gatherings, seminars, with handed-out printed leaflets and manuals, with overhead slide presentations -- in *that* case, then yes, PDF is king.
TeX and PDFs are no replacement for the interconnected power of the web. When I want a quick piece of information in <10 seconds, I do not want to consult a hand-collected folder of PDFs, or google for it and wait an age for a PDF to load.
I grep the code. It works even offline, and in less than a second.
Yes. But the web works in less than a second too (albeit only while online, but who is ever offline?), and the web is far more than a 'World Wide Grep'. It's an unimaginably vast, cross-referenced, semantically aware net with search engines of huge processing power. Executing `grep 'interpretation of grave character' *` unfortunately does not give quite the same result.
That kind of feeling, I guess, is the reason that the contextgarden wiki exists. But MediaWiki is really not the most appropriate way to document a project either. Wikis are messy and unstructured. They don't lend themselves well to the hierarchical kind of structure appropriate for representing a codebase. So I'm suggesting that ConTeXt be documented using a typical established documentation system.
I disagree. The minimals should be self-contained; a documentation system not done in ConTeXt can introduce a useless dependency.
Anyway, even if there is already http://foundry.supelec.fr/gf/project/modules/scmsvn/ (which is only useful as a testbed, not for documentation)
or if we one day have something like cseq (see http://www.tug.org/utilities/plain/cseq.html, possibly generated automatically from the code base)
This looks lovely.
or a wiki book (see http://en.wikibooks.org/wiki/LaTeX, apropos of "Mediawiki is really not the most appropriate way to document a project")
it will not be enough --- a good starting point, of course.
In the end, one needs to understand the language and its semantics, and study the code. With the TeXbook, a couple of manuals from Pragma (cont-en, metafun) and the code you are OK (well, the ~1000 pages of PDF specs are not bad either, and also some book about fonts ...).
Mmm, yes, you've made quite a lot of demands there on the curious programmer having stumbled across ConTeXt ...
The rest are articles, and they are OK too. TeX is a macro language. There are almost ~1000 macros, and maybe ~500 macros in ConTeXt. Even if we are able to "document" them in some manner, understanding them and their relations is a matter of studying the code.
I don't think so. The "just study the code" approach shows an awfully austere, reductionist philosophy. Humans understand things from the top down. It's the computers that work from the bottom up.
About the model of development: one developer is not so strange, after all.
I'm not sure what your point is here. That user contribution leads to 'featuritis'? I totally understand that being 'frozen' is not a bad thing; it effectively means 'having reached a state of perfection for the defined task' -- I don't think this has a connection with having one developer. More developers == faster rate of approach to the limit of perfection.
No, not necessarily, and not in this situation. For TeX, frozen means no new features, only bugfixes; it means that the language is maintained and that backward compatibility is very important (about 80% of scientific articles are in TeX, so backward compatibility really matters). It doesn't mean that the language is perfect. To me, frozen simply says "it's time to explore the semantics of the language rather than add new features".
This model doesn't imply that you cannot contribute to the code base, only that all contributions need to be validated (and possibly rejected) and integrated by the developer. You can also contribute third-party modules, but they are not in the base code, and in case of conflicts the base code wins.
Sure thing -- revision control doesn't hinder that at all. If Hans doesn't want to merge someone else's changes to his (authoritative) copy of the repo, then he doesn't have to. DVCS != chaos.
One developer assures that there is exactly one version and no forks (friendly or not). This is also OK because there is no need for forks (after all, no one is thinking of forking LaTeX2e):
I think you're thinking of 'forking' as something dangerous (yeah, the word sounds painful), as something that will fragment the community, as something that destroys the concept of 'authority'. It's really not. Where you get forking you get merging at roughly the same rate.
If Hans doesn't want to merge someone else's changes to his (authoritative) copy of the repo, then the changes are rejected from the code base.
I'm not saying that a DVCS is useless for documentation or manuals. But without contributors a DVCS can be practically useless, and the only contributors for the manuals at the moment are Taco for luatex and Hans for ConTeXt mkiv.
Why are they the only contributors?
-- luigi
On Thu, 4 Mar 2010, James Fisher wrote:
I'm not saying that a DVCS is useless for documentation or manuals. But without contributors a DVCS can be practically useless, and the only contributors for the manuals at the moment are Taco for luatex and Hans for ConTeXt mkiv.
Why are they the only contributors?
Because no one else (myself included) has actually contributed anything to the documentation. Compare http://foundry.supelec.fr/gf/project/contextman/scmsvn/?action=ScmStats with the number of "developers" at http://foundry.supelec.fr/gf/project/contextman/

To be honest, other people have contributed, especially translations of the documentation and documentation of some exotic features. But most beginner-level and user documentation is written by Hans and Taco. In my opinion, it is hard to write coherent documentation in a distributed manner (different writing styles, etc.). You are saying that it is just a matter of having the right infrastructure. Judging by the way things have evolved in the past, I am not so sure.

If you really want to test how online documentation would work, you can try to convert parts of the beginners document to HTML. Compare http://foundry.supelec.fr/gf/project/contextman/scmsvn/?action=browse&path=%2Fcontext-beginners%2Fen%2Fma-cb-en-itemizations.tex&view=markup with what you would write using Sphinx.

Aditya
On Thu, Mar 4, 2010 at 8:44 PM, James Fisher
ConTeXt was not created to produce documentation for ConTeXt.
This is not the point. The point is that the code documentation of ConTeXt can be made with ConTeXt; see for example http://foundry.supelec.fr/gf/project/modules/scmsvn We don't need Sphinx or similar, but of course Hans can decide to use it.
HTML is typographically crude, but, and this is important, *informationally*, HTML (and the web and friends) is far from crude.
True, and your job is good.
Mmm, yes, you've made quite a lot of demands there on the curious programmer having stumbled across ConTeXt ...
No one is saying that it's easy. And, really, it's not easy.
I don't think so. The "just study the code" approach shows an awfully austere, reductionist philosophy.
True, but I have not said this. TeX comes with The TeXbook (a "high-mid-low level" manual) and TeX: The Program (the code). It's the same here, more or less.
Humans understand things from the top down. It's the computers that work from the bottom up.
Humans understand things bottom-up, top-down, by trial and error, and probably in other ways that we don't understand well enough to formalize. Working with TeX is a mix of bottom-up, top-down, trial and error, and fortune.
I think you're thinking of 'forking' as something dangerous (yeah, the word sounds painful), as something that will fragment the community, as something that destroys the concept of 'authority'. It's really not. Where you get forking you get merging at roughly the same rate.
No, not dangerous. Actually, useless. And yes, community and authority really are important in this context. Why is this so hard to understand?
Why are they the only contributors?
See Aditya's reply. Apart from translations, Taco and Hans are the only people who are actually able to produce minimal, complete, and exhaustive documentation.
-- luigi
(Can I leave all of this for a bit? I'll reply tomorrow, I think, but first...)
I'd like to go back to the very first post about problems with flush right.
The \setbreakpoints command works to an extent, but I'm still experiencing
issues where, when a hyphenated string has been broken, the first half of it
still sticks out. I unfortunately can't show you the example, and it's hard
to reproduce. But can anyone answer: does the TeX line-breaking algorithm
retain the possibility of lines overrunning the defined boundary, if the
algorithm decides that the alternatives are more ugly?
James
On Thu, 4 Mar 2010, James Fisher wrote:
I'd like to go back to the very first post about problems with flush right. The \setbreakpoints command works to an extent, but I'm still experiencing issues where, when a hyphenated string has been broken, the first half of it still sticks out. I unfortunately can't show you the example, and it's hard to reproduce. But can anyone answer: does the TeX line-breaking algorithm retain the possibility of lines overrunning the defined boundary, if the algorithm decides that the alternatives are more ugly?
Yes. Try \setuptolerance[tolerant] or \setuptolerance[verytolerant]. Aditya
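For anyone following the thread, the remedies mentioned here combine into a minimal test file. This is an untested sketch: \setbreakpoints and \setuptolerance are the commands named in this thread, but the option combination and the sample text are my own illustration, not a verified fix for the original document.

```tex
% Sketch: remedies for words sticking out past the right margin.
% \setbreakpoints[compound] adds break opportunities inside compound words;
% \setuptolerance tells the paragraph builder to accept looser interword
% spacing rather than let a fragment protrude into the margin.
\setbreakpoints[compound]
\setuptolerance[verytolerant,stretch]

\starttext
Some narrow-measure text with long hyphenated-compound-words that would
otherwise overrun the measure when no acceptable breakpoint exists.
\stoptext
```

If `verytolerant` is too loose for your taste, `tolerant` is the milder documented setting.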
Perfecto.
On Thu, Mar 4, 2010 at 8:44 PM, James Fisher wrote:
> I think you're thinking of 'forking' as something dangerous (yeah, the word sounds painful), as something that will fragment the community, as something that destroys the concept of 'authority'. It's really not. Where you get forking you get merging at roughly the same rate.

Just an example. I made a sort of "fork" of luatex 0.46 with luatex lunatic --- see the last EuroTeX meeting. This is what I learned:
1) it's doable by anyone with some skills in programming;
2) it's nothing new from a typographical point of view;
3) we -- as a TeX community -- don't need it.

So it's really true that one can modify/fork luatex for one's own needs --- and I will do it again; I have other bindings on my list. It's also true that in this way luatex+mkiv can become your powerful and private tool for your particular workflow, or that in this manner some modifications can enter the main luatex, if Taco thinks they are OK. For example, at the moment I see more and more problems with dynamic loading, so I think that my modifications are not OK for luatex --- but Taco has the last word, and that's not a problem for me.

But, still, we -- as a TeX community -- don't need it. What we must do is support Taco and Hans in their job of developing luatex and mkiv, with testing and meaningful requests; the development team has been up and running for about 5 years and they have done a really good job so far, and I see no reason for changes. (I'm not on the dev team, btw, so this is just my opinion.) This is why I don't see documentation as a high priority --- of course, I'm always waiting for the next PDF from Hans.

-- luigi
participants (5)
- Aditya Mahajan
- Arthur Reutenauer
- James Fisher
- luigi scarso
- Vyatcheslav Yatskovsky