This week, in a widely anticipated move, Google and Microsoft both announced their intention to augment search engines with large language models. It marks a shift away from “organizing the world’s information,” to borrow from Google’s mission statement, toward simulating it. Web crawlers once attempted to index the internet to keep up with its expansion, offering an updating map of an actual, evolving territory. Large language models, by contrast, arrest and ingest as much of the internet’s content as possible at one particular moment and process it into a static set of statistical probabilities, which can then offer a procedurally generated terrain produced only as you navigate through it.
This is, overall, a good analogy, and I share the trepidation that answers without citations will stifle further curiosity. But the filtering and paraphrasing are not new to LLMs: there is plenty of misleading compression and reconstruction of source material undertaken by human authors.