Let Me Have This Dust
To draw attention to machine learning research, it apparently helps to tout it as “dangerous.” The research company OpenAI received a lot of media attention last week by refusing to release the latest iteration of its text-generating, language-predicting model, GPT-2, claiming it was too good at producing readable text on command. “Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code,” they wrote.