4 Comments

I found that article strange because it was Netflix confidently declaring that their way of organizing information (specifically movies and TV shows) was successful and effective and exactly what people wanted... and, like you, I found it unintuitive. I'm a user researcher, so I expect people's stated motivations to differ widely from the Big Data theories about how and why we behave, and therefore about how systems should "help" us. It seems very, very Netflix to do this and to trumpet it so proudly, and sadly it's very mainstream media to simply accept it as a given. (I haven't gone back to re-read the piece since looking at it the other day, so apologies if I misspeak here.)

You put forth a compelling case! It’s maybe another way to approach something that I’ve always felt about VC futurists in the age of microdosing and Steve Jobs-style visionary capital: that they are essentially neo-Platonists, which is to say, occultists in an almost literal sense -- questions like “what is happening when you envision something?” and “what is it that you think of when you think of a dog?” become absurdly serious.

You get the sense that AI is being seen as a machine “mind” capable of crossing the barrier of imperfect reflection (the dogs we see every day) to access and reveal the perfect thing itself (the dog you’re imagining). Magickal thinking.

Not for nothing that this kind of occultist thought, so focused on the distance between the imperfection of the real and the perfection of the imagined, tends to flare up alongside the rise of fascist movements, and it finds a lot of sympathy with their aesthetic drives -- Thomas Bernhard famously said he couldn’t tell the difference between Nazis and postwar Catholic intellectuals. These days you can’t tell the difference between a Nazi and an engineer taking note of how many Platonic solids he can concurrently rotate in his mind’s eye.

I'm interested to see where you might take this topic. There's been a bit of discussion recently on the interwebz about whether LLMs are 'stochastic parrots' or something more closely resembling human intelligence, which I'm just starting to read into. However, I can only see the latter being plausible if consciousness is taken out of the picture of human intelligence -- which might imply some circularity: i.e., human intelligence is understood as a highly complex machine, and can therefore be approximated by a less complex machine. My understanding of Saussurean structuralism from general linguistics has never encompassed a stochastic dimension, which may reflect the formalist approach I was exposed to.
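For what it's worth, the most literal reading of 'stochastic parrot' is something like a toy Markov-chain text generator: it can only re-emit word transitions it has already seen, sampled at random. A minimal Python sketch (purely illustrative, nothing like an actual transformer, and all the names here are my own) would be:

    import random
    from collections import defaultdict

    # Toy bigram Markov chain: the crudest possible "stochastic parrot".
    # It only re-emits word transitions seen in training, sampled at random.

    def train_bigrams(text):
        """Map each word to the list of words that followed it in the corpus."""
        successors = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            successors[current].append(following)
        return successors

    def generate(successors, start, length=10):
        """Random-walk the table: each next word is sampled, never reasoned about."""
        word, output = start, [start]
        for _ in range(length):
            candidates = successors.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # the "stochastic" step
            output.append(word)
        return " ".join(output)

    corpus = "the dog you imagine is not the dog you see every day"
    print(generate(train_bigrams(corpus), "the"))

The whole debate is over whether LLMs are just a vastly scaled-up version of that sampling step, or something categorically different.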

This is right up George Carlin's alley. I can hear his tone as I read it.
