On 22/11/19 08:43, Jan U. Hasecke wrote:
On 20.11.19 at 18:10, Henning Hraban Ramm wrote:
Hi!
I’m running ConTeXt on my web server e.g. to generate shipping forms for a customer.
As Hans said, it makes sense to use an asynchronous setup; in my case it’s celery/RabbitMQ behind Django.
You probably need to set a few environment variables: I find HOME, PATH, TEXROOT and TEXMFOS in my setup. I don't know if you really need all of them; it has been running for several years… Also, your web server process might only be allowed to run binaries that belong to some user/group like wwwrun.
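Roughly something like this is enough (only a sketch; the task name, paths and values below are placeholders, not my actual setup):

```python
# tasks.py -- sketch of a celery task that calls ConTeXt asynchronously.
# Broker URL, paths and environment values are placeholders; adjust to your install.
import os
import subprocess
from celery import Celery

app = Celery("typesetting", broker="amqp://localhost")

@app.task
def typeset(jobdir, texfile):
    env = dict(os.environ)
    env.update({
        "HOME": "/srv/context",                                        # writable HOME for the web user
        "PATH": "/srv/context/tex/texmf-linux-64/bin:" + env.get("PATH", ""),
        "TEXROOT": "/srv/context/tex",
        "TEXMFOS": "/srv/context/tex/texmf-linux-64",
    })
    # run ConTeXt inside the job directory with the adjusted environment
    subprocess.run(["context", texfile], cwd=jobdir, env=env, check=True)
    return os.path.join(jobdir, os.path.splitext(texfile)[0] + ".pdf")
```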
It won’t work in shared hosting, because you can’t install your own programs.
Generally this is correct, but there are providers such as the Hostsharing cooperative where it is possible to install programs in a shared hosting environment. We have been doing it for years. ;-)
Otherwise: what kind of documentation do you need? Installing ConTeXt on a (web) server is no different from installing it on any other Linux system. Calling ConTeXt from a web application is no different from calling any other external program. The rest depends on your setup and web framework.
I am very interested in running ConTeXt as a service, too. I am still nurturing the idea of a publishing cooperative for self-publishers, with a Markdown --> Pandoc --> ConTeXt workflow behind a nice web frontend. I hope to make it to the next ConTeXt meeting to discuss it.
I'm using ConTeXt inside a Docker container. The same container runs a Node+Express.js interface that accepts documents and serves the resulting PDFs, as well as job information while ConTeXt is typesetting a document. That way I get:

- container isolation: the container and the host can run different ConTeXt versions (the same goes for other software);
- thanks to Node+Express, a client-server workflow over a network.

The problems arise when you need to move data to and from the container. Docker lets you share a local filesystem path with the container, but that's not possible over a network (not with the performance of a local filesystem, I mean). Your sources may not be too heavy, but your ConTeXt environments, graphics and fonts probably are, and so are the PDFs ConTeXt produces. That's why I'm using a Docker container image that preloads all the configuration files (.tex and .lua), graphics and fonts at image build time. It's not a generic, all-purpose ConTeXt container; it's tailored to one very specific kind of document. That way the only things I move around are XHTML sources and typeset PDFs.

Things get really complex if you want to use containers to scale up, with a whole battery of identical ConTeXt servers distributed in a cloud. That's not my case: I have only one container on one physical server, reachable from many network clients. Stateless containers are indistinguishable, but ConTeXt containers hold state: your documents and all the data needed to typeset them. When you ask for a resulting PDF, only one specific container can answer, unless that PDF has been put on shared storage; but that can mean moving it over the network, adding latency and load. Suppose you slightly modify your document and want to typeset it again. Reusing the .tuc file of the previous version could save you some runs, but only the previous container knows about that .tuc file (see the first sketch at the end of this mail). So pick one: run ConTeXt in the same container, or move the .tuc file over the network to another one. It's the general problem of managing state with containers.

Another question: previewing the result of typesetting. When the PDF is in your own directory, you can see it as soon as ConTeXt finishes a run; I keep evince open on the file while I'm running ConTeXt locally. When ConTeXt is working on a server, in the simplest setup you must wait until it has finished all the runs before you can download the PDF. That's how I started. Now I keep a copy of the PDF inside the container, refreshed every time a single run ends; that way you can download the PDF right after the first run. But it also means downloading the PDF n times, where n is the number of runs. That's why I'm leveraging the Node+Express interface to provide a single-page preview: a rendering of one page at screen resolution, produced inside the container with pdftocairo. The image is also cached, so it is regenerated only when a new PDF, from a new run, is ready (see the second sketch at the end of this mail). Instead of downloading the PDF n times, I get a light and early preview of single pages, and the interface lets you download the whole PDF once it's done.

Massi
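P.S. Two rough sketches of what I mean, in Python for brevity (my real interface is Node+Express, so take these only as illustrations; names and paths are invented). First, reusing the .tuc of a previous run: if the old jobname.tuc is put back next to the source before calling ConTeXt, the multipass data is picked up and some runs may be saved.

```python
# sketch: restore the previous .tuc before typesetting, so ConTeXt can reuse
# the multipass data instead of starting from scratch (paths are made up)
import shutil
import subprocess
from pathlib import Path

def typeset_with_state(jobdir: Path, jobname: str, tuc_store: Path):
    saved_tuc = tuc_store / f"{jobname}.tuc"
    if saved_tuc.exists():
        # put the old multipass data next to the source
        shutil.copy(saved_tuc, jobdir / f"{jobname}.tuc")
    subprocess.run(["context", f"{jobname}.tex"], cwd=jobdir, check=True)
    # keep the fresh .tuc for the next request; with a second container this
    # copy would have to travel over the network instead
    shutil.copy(jobdir / f"{jobname}.tuc", saved_tuc)
```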
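Second, the cached single-page preview: pdftocairo renders one page to PNG at screen resolution, and the image is regenerated only when the PDF from a new run is newer than the cached image.

```python
# sketch: render page `page` of the current PDF to a cached PNG with pdftocairo,
# regenerating it only when a newer PDF (from a new run) is available
import subprocess
from pathlib import Path

def preview(pdf: Path, page: int, cache_dir: Path, dpi: int = 96) -> Path:
    cache_dir.mkdir(parents=True, exist_ok=True)
    png = cache_dir / f"{pdf.stem}-{page}.png"
    if png.exists() and png.stat().st_mtime >= pdf.stat().st_mtime:
        return png  # cached image is still current
    # pdftocairo appends ".png" itself, so pass the prefix without extension
    prefix = png.with_suffix("")
    subprocess.run(["pdftocairo", "-png", "-singlefile",
                    "-f", str(page), "-l", str(page),
                    "-r", str(dpi), str(pdf), str(prefix)],
                   check=True)
    return png
```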