2 Comments
Apr 5, 2023 · edited Apr 5, 2023

Sounds like the gray goo problem applied to knowledge.


> LLMs won’t just explain how to do things, either; they’ll draw on the collective accumulation of human wisdom to offer ideas on how to solve problems. You won’t have to know how to research and you won’t have to respect any copyrights on that “collective accumulation of human wisdom” (which, after all, amounts to nothing more than the largest number of words assembled in one place); you can just ask a model what to do and then follow its orders, if it can’t already execute the task itself.

That is really something, because it could enable everyone to do good research themselves. Alas, I suspect most people will just use it for "trolling", or for superficial research: ask the LLM and then never think about the answer again.

We already have such things; they are called books, or more specifically, encyclopedias. Alas, same problem: reading them, thinking them through, and using them to create new thoughts is very exhausting. The brain consumes a large share of the body's energy, blood sugar, and oxygen, even when it is idling, and the demand only grows once you actually put it to work.

So LLMs could be a breakthrough, but only for the few who use them to think. All others will just use them as a shortcut to avoid thinking for themselves, I fear.

This comes with a danger: it's well documented that people who rely on navigation software tend to lose their own ability to navigate (and the same goes for every other task we outsource). We lose our own capacity to think if we let something else think for us.
