Just a wild guess - mind you - but the generic idea of a server/client setup is (was) that of a very powerful server capable of doing *anything* and a client kept as simple (and as low-power) as possible.
So the idea makes sense to me: a server can serve *any* packet size requested by the client, while the client may well have a single (hardcoded) blocksize only in order to stay "simpler".
Reading "between the lines" of the RFC2348 (which in itself is a late (1998) extension):
https://tools.ietf.org/html/rfc2348
the *feeling* I expressed above is reinforced. TFTP is used because the client side needs to be as simple as possible (but not simpler). The key word here is "negotiate": anything different from the standard 512-byte block size needs to be negotiated by both parties, but it seems clear enough that the "strong" party in the deal is the server side, which receives the blocksize request from the client and then sends an acknowledgment (OACK) with a value either equal to or smaller than what was requested.
So ultimately, it is the server that determines the actual blocksize to be used in the transfer.[1]
Then it is again the client's "responsibility" to either accept the (equal or smaller) blocksize in the OACK or terminate the transfer with error code 8.
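Just to visualize the idea, a minimal sketch (NOT actual TFTP code, the function names and the 1428 limit are made up by me) of how I read the RFC 2348 rule, server answers with a value equal to or smaller than the request, client accepts it or bails out with error 8:

```python
SERVER_MAX_BLKSIZE = 1428   # hypothetical server limit, e.g. to avoid IP fragmentation


def negotiate_blksize(requested: int) -> int | None:
    """Return the blocksize the server puts in its OACK, or None to
    ignore the option and fall back to the classic 512-byte blocks."""
    if not 8 <= requested <= 65464:      # valid range per RFC 2348
        return None                      # malformed option: ignore it
    # Never go above what the client asked for, never above our own limit.
    return min(requested, SERVER_MAX_BLKSIZE)


def client_accepts(requested: int, oack_value: int) -> bool:
    """Client side: accept the OACK value only if it is within range and
    not larger than what was requested, otherwise send ERROR code 8."""
    return 8 <= oack_value <= requested
```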
Wonko
[1] This makes IMHO a lot of sense in another hypothetical scenario: after some specific network tests, a particular TFTP server could be programmed to accept (say) an 8192-byte blocksize from LAN IPs (or MACs) known to be on a "fast" branch of the network, but only a maximum of (still say) 2048 bytes from LAN IPs (or MACs) known to belong to a "slower" branch of the network, in order to get a valid compromise between speed and reliability.
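Again only a sketch of what such a (purely hypothetical) per-branch policy could look like; the subnets and limits are invented:

```python
import ipaddress

# Hypothetical policy table: branch subnets and the max blocksize
# the admin decided each one can handle reliably.
BRANCH_MAX_BLKSIZE = {
    ipaddress.ip_network("192.168.10.0/24"): 8192,   # "fast" branch
    ipaddress.ip_network("192.168.20.0/24"): 2048,   # "slower" branch
}
DEFAULT_MAX_BLKSIZE = 512    # anything unknown stays on the classic size


def max_blksize_for(client_ip: str) -> int:
    """Pick the maximum blocksize the server is willing to OACK for this client."""
    addr = ipaddress.ip_address(client_ip)
    for net, limit in BRANCH_MAX_BLKSIZE.items():
        if addr in net:
            return limit
    return DEFAULT_MAX_BLKSIZE

# The server would then answer with min(requested, max_blksize_for(client_ip))
# in its OACK, still never exceeding what the client asked for.
```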