
On 5/1/25 13:34, Steffen Wolfrum wrote:
> Hi Pablo,
> … well, it was (again) a demand from publishers, it wasn't my idea.
Hi Steffen,

I see… then allow me some comments. I'm afraid I cannot avoid thinking that the whole thing is flawed by design.

Article 4.3 of Directive (EU) 2019/790 (https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32019L0790#00...) requires rightholders to make an explicit reservation in order to legally object to text and data mining for commercial purposes (no such reservation applies to scientific research). So we are all in by default; in some cases we might opt out. Even if AI training complied with these reservations, the vast majority of independent creators and small publishers may not even have the means to express one in machine-readable form (see the example at the end of this message). Requiring explicit permission (opt-in) would have been by far a more sensible policy.

The problem still remains for individual works and creators. How can we know whether a single work has been used to train an AI model? Even if someone is caught infringing, infringement may be so common that nothing happens. We have already seen this with big media outlets using works released under free licenses without complying with their terms. Of course, in that case small creators have the right to sue the infringers, but they lack the money, time and legal teams to even think of it.

Just in case it might help in some remote case,

Pablo
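
P.S. For what it's worth, a sketch of what such a machine-readable reservation commonly looks like in practice: a robots.txt file at the site root blocking known AI crawlers. The user-agent tokens below are real (GPTBot is OpenAI's crawler, Google-Extended is Google's token for AI-training use), but as far as I know it is still an open legal question whether a robots.txt rule actually satisfies the Directive's reservation requirement.

    # robots.txt at the site root: ask AI-training crawlers to stay away
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /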