On Sat, 6 Oct 2018 13:06:18 -0400 (EDT), Aditya Mahajan wrote:
In my opinion, a better long-term option is to write a Jupyter client in Lua that can be called by ConTeXt. Then we could easily interface with all languages that provide a Jupyter kernel (https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
The interface of a Jupyter client is documented at https://jupyter-client.readthedocs.io/en/stable/index.html. It seems relatively straightforward (send a JSON message and receive a JSON message). Translating the JSON messages to ConTeXt should also be easy. Is there anyone who wants to play around with implementing this?
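To make the idea concrete, here is a rough sketch in Lua of what a single execute_request message might look like. It assumes the dkjson library for JSON encoding (ConTeXt may well have its own JSON helpers) and leaves out the ZeroMQ transport and the HMAC signing that the wire protocol also requires, so it is a starting point rather than a working client.

-- Sketch only: build a Jupyter "execute_request" message as a Lua table
-- and serialize it to JSON.  Assumes the dkjson library; the ZeroMQ
-- sockets and the HMAC signature of the wire protocol are omitted.
local json = require("dkjson")

-- dkjson serializes empty tables as JSON arrays unless told otherwise,
-- but the protocol expects empty *objects* in these slots.
local function emptyobject()
    return setmetatable({}, { __jsontype = "object" })
end

local function execute_request(session, code)
    return {
        header = {
            msg_id   = string.format("ctx-%d-%d", os.time(), math.random(1, 1000000)),
            username = "context",
            session  = session,
            date     = os.date("!%Y-%m-%dT%H:%M:%SZ"),
            msg_type = "execute_request",
            version  = "5.3",
        },
        parent_header = emptyobject(),
        metadata      = emptyobject(),
        content = {
            code             = code,
            silent           = false,
            store_history    = false,
            user_expressions = emptyobject(),
            allow_stdin      = false,
            stop_on_error    = true,
        },
    }
end

print(json.encode(execute_request("context-session", "1 + 1"), { indent = true }))

Handling the execute_reply coming back would be the reverse step: json.decode the message, pick out the content, and hand the result to ConTeXt.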
Jupyter runs Python code. Have you ever tried doing any really heavy data analysis in Jupyter? My experience is that it chokes on large data sets. So why write Lua code to call a Jupyter kernel running Python? Would it not make more sense to develop the code directly in Lua in this case?

The one thing that Python (and Jupyter), or R for that matter, brings is libraries of calculation routines. These can be quite sophisticated, some efficient and some not so efficient. My approach has always been to write my own routines or to adapt published algorithms; at least then I know what the calculation is actually doing. Of course, this means that I spend time redoing what might have been done elsewhere, but the variety of routines that I actually use is rather small.

Our experience with Lua and with MetaPost is that one can achieve HUGE differences in efficiency through careful programming: factors of 2-3, or even orders of magnitude. (Hans can usually succeed in speeding up my Lua code.) Sometimes surprising things can make huge differences (never surprising once one understands what is happening). One can hope that Python and R (and other) developers are efficiency-minded programmers, but this is not always the case.

Alan
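One classic illustration of such a surprising difference in Lua (an illustrative example, not taken from the code discussed above): building a long string by repeated concatenation versus collecting the pieces and calling table.concat once.

-- Illustrative micro-benchmark: repeated concatenation copies the whole
-- string on every iteration (quadratic total work), while collecting the
-- pieces in a table and calling table.concat once is roughly linear.
local n = 200000

local t0 = os.clock()
local s = ""
for i = 1, n do
    s = s .. "x"            -- copies the whole string each time
end
print("concatenation in a loop:", os.clock() - t0)

local t1 = os.clock()
local parts = {}
for i = 1, n do
    parts[i] = "x"          -- just stores a reference
end
local s2 = table.concat(parts)
print("table.concat:           ", os.clock() - t1)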