But in the post you linked to, it seems that even two fragments can already be too many. So not all fragments are equal in this respect.
Maybe there is a misunderstanding.
At the time of the mentioned post, even 2 fragments meant that you had to map with --mem; for the specific case of a large image on a small NTFS volume, we successfully tested the workaround of making a very small NTFS volume and then enlarging its filesystem.[1]
Some time later someone (I believe karyonix) made a modification to grub4dos (which later became official) to allow mapping fragmented files.
I never tested it (as I try to keep my actual images contiguous), but I suspected that the number of fragments allowed by this modification was only a handful (which you have just confirmed experimentally to be 4).
How many fragments (on average) fit in the 1200-character limit you just found?
If we say that - even on a largish disk - a fragment (including the separating comma) fits within 12 characters, this makes roughly "up to 100 fragments" possible.
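The arithmetic above can be sanity-checked in a couple of lines (the 1200-character limit and the ~12 characters per fragment are the figures from this thread, not something I re-measured):

```python
# Back-of-the-envelope check of the fragment-count estimate.
# Both constants are assumptions taken from the discussion above.
var_limit = 1200         # character limit found for a grub4dos variable
chars_per_fragment = 12  # one fragment entry, separating comma included

max_fragments = var_limit // chars_per_fragment
print(max_fragments)     # → 100
```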
All in all, a file with more than 100 fragments (or so) should (hopefully) be very, very, very rare.
BUT (stupid idea maybe).
Is it possible to write the list of blocks to (md), then prepend the map command to it, append the (rd) part, and then copy/dd the whole thing to the command buffer?
This way no variables would be needed (for the fragments).
Still, I suspect that the gain would be minimal, because of the "other" 1535/1536-character limit on the command line.
The "proper" way is most probably mapping each fragment (or - say - sets of 10 fragments) to a (rd) moving each time the rd_addr, then finally move back the rd_addr to first block and set rd_size to the whole file size.
Wonko
[1] which remains a nice trick for a few other things, like using truncated images.