There is your unified system. XML rulez - for better or for worse. It's really no fun to write XML by hand.
But, as you said, TeX and LilyPond have a similar syntax. I believe they could share some kind of common language.
What you are thinking about is probably a "master document" scheme that would locally contextualize and process content. You would likely need some kind of "magic number" system for this to work (a rough sketch is in the P.S. below). That would probably rule out older hardware, because loading and unloading entire backends, or trying to run them all at once, is expensive in processing and memory. You also have the potential for fork hell or dependency hell. IIRC there are MusicTeX, MusixTeX, LilyPond, etc., and some take this approach and others that. There is an Omega package for typesetting pages from the Biblia Hebraica Stuttgartensia, there is Makor, and then there are fonts that do most of that themselves. You have EDMAC and ledmac, but ConTeXt could probably handle the lemmatization even more intuitively. (I ought to try that.) So which way is right, given that coexistence may be a problem? One wants plain TeX, the other LaTeX, and you can't necessarily intermix the two. I'm no XML guru, but that's the likely solution.

Since everything has a history, you have to build a community, where picking and choosing this over that can be a problem; see, for example, http://blogs.sun.com/jimgris/entry/building_opensolaris_communities

Then we get to GUI or not. XML would probably be the candidate there as well, in which case Scribus or OOo would be a good place to look. But then you have all the complex development issues with OOo; it can take a day to compile on anything more than two or three years old.

In the end, a typesetting metalanguage would require a community to use it, deep wallets to fund it, or both. And you would have to ask people, some of whom still miss their old Lisp machines, Multics, and so on, to make a switch when they know that publishers already have their niche development tools in place. DEK himself wanted to encourage not simply finding, or agreeing on, the right answer to a question (why some hate The TeXbook) but the heuristics for finding the "right" questions and their answers (why some love The TeXbook). But whose "right" wins in the design of the metalanguage? Whatever the answer, it would collide with good old Appendix D, "Dirty Tricks", and everyone's dirty trick complicates the interaction of plugins.

It's like the old days, when you saved memory on a machine by putting what looks to be data in an odd-sized piece of memory, when it is really a set of instructions stored at an (unusual) odd address instead of an even one. How do you disambiguate that? You can't just use an assembler; you have to disassemble the hand-coded "data" with a debugger to see the real instructions. OK, resources are cheaper now, but this means either declaring dirty tricks illegal for a historic instruction set, be it brand X assembler or TeX, or coming up with a huge parser that can sail the seas of corner cases. The former threatens backward compatibility and would meet resistance in the TeX/LaTeX community, and probably others. OpenDocument does have government support, so that's an edge, but then there's Microsoft, which herds its users into the pastures of non-standards-compliance. And huge parsers are expensive to implement in several ways. Besides, wasn't SGML supposed to be a generalized markup language?

I do like the idea, but I think that balancing details against abstractions (the sins that LaTeXers commit at times come to mind) is always going to be a sort of NP-complete issue.

Charles
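
P.S. A rough Python sketch of what I mean by a "master document" with "magic numbers": each block carries a marker naming the backend that should process it, and a small driver dispatches the blocks. The %%begin/%%end syntax and the backend table are made up for illustration; a real system would still have to splice the rendered output back into the page, which is where the hard part begins.

    # Minimal sketch of a "master document" dispatcher. The block syntax
    # (%%begin/%%end) and the backend commands are hypothetical, chosen
    # only to illustrate the idea of per-block magic markers.
    import re
    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical mapping from magic marker to the command that renders a block.
    BACKENDS = {
        "tex":      ["pdflatex", "-interaction=nonstopmode"],
        "lilypond": ["lilypond", "--pdf"],
    }

    BLOCK_RE = re.compile(r"%%begin (\w+)\n(.*?)%%end \1\n", re.DOTALL)

    def split_blocks(source: str):
        """Yield (backend, payload) pairs found in the master document."""
        for match in BLOCK_RE.finditer(source):
            yield match.group(1), match.group(2)

    def process(source: str, outdir: Path):
        outdir.mkdir(parents=True, exist_ok=True)
        for i, (backend, payload) in enumerate(split_blocks(source)):
            cmd = BACKENDS.get(backend)
            if cmd is None:
                print(f"block {i}: no backend registered for '{backend}', skipping")
                continue
            # Write the payload to its own file and hand it to the backend.
            src = outdir / f"block{i}.{backend}"
            src.write_text(payload)
            if shutil.which(cmd[0]) is None:
                # Backend not installed; just show what would have run.
                print(f"block {i}: would run {cmd + [src.name]}")
                continue
            subprocess.run(cmd + [src.name], cwd=outdir, check=False)

    if __name__ == "__main__":
        demo = (
            "%%begin tex\n\\documentclass{article}\\begin{document}Hi\\end{document}\n%%end tex\n"
            "%%begin lilypond\n\\version \"2.24.0\" { c' d' e' }\n%%end lilypond\n"
        )
        process(demo, Path("build"))

Even in this toy form you can see the cost: every registered backend is a dependency, and every block boundary is a place where two syntaxes can fight over escaping.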