Part of the process of stoking hype around generative AI is the stream of articles and tweet threads promising to let readers in on the tips and secrets of better prompting, to unlock the models’ true potential. Don’t be too proud to ask the model to explain things as if to an eight-year-old!
Many of these productivity hacks strike me as short-sighted even when they are not infantilizing. The interface with chatbots will not always appear only as a command line or an open-ended search box, especially given that the sales pitch for AI is that it will make information retrieval more convenient and dummy-proof. There is no reason to think any aspect of an automating process won’t itself be further automated. It’s easy to imagine “prompt engineering” being boiled down into various APIs that establish specific contexts for a model’s outputs: a set of sliders and presets that determine how you can tweak a model’s responses, or a series of questions and drop-down menus that condition a model’s output according to a boss’s demands. So I don’t think there will be much of an enduring edge for most people in knowing how to craft artisanal prompts as if they were magic incantations or coding tricks.