One of the main assumptions behind large language models is that words are no different from numbers. All words can be reduced to numbers; all meanings can be assigned a definite value that is ultimately arbitrary. Meaning itself is a meaningless subjective mirage, something that can finally be eradicated once language is solved once and for all, its workings fully formulated and rendered calculable, like the operations of any other piece of machinery.
Your point that treating language as essentially no different from math or numbers implies that LLMs are a way to program human capabilities and mask spontaneity is very compelling. It's also interesting to contrast this with how LLMs are already branded - as a way to "free us up for creative/spontaneous work" (in writing, for example, or even in the newsroom, which is my domain specifically). This essay gave me lots to think about, as always!