GPT-3, which we can define as the autocomplete tool by OpenAI trained on a large amount of unlabeled text from the internet, is quite impressive: an advance comparable to what happened to AI image processing from 2012 onward. We can safely ignore the hype. It’s probably a dead end on the road to artificial general intelligence (see its performance on the Turing test), and I doubt it’s going to replace developers. But as an autocomplete, at guessing answers to common-sense or trivia questions, it’s a leap forward. Here I am asking Alexa for the third time to lower the volume, and this thing can almost hold a conversation.

Anyway, because of how it works (you give it some text, a prompt, and it guesses what comes next), in recent weeks the internet has been inundated with things people made GPT-3 do. There’s even a creative writing course taught by GPT-3 (which is probably as valid as most creative writing courses).
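
For the curious, here’s a minimal sketch of that prompt-and-complete loop using OpenAI’s Python client as it worked in the GPT-3 era (the pre-1.0 `openai` package); the prompt and parameters are illustrative, and you’d need your own API key:

```python
import os

import openai

# Classic completion-style call: hand GPT-3 a prompt, get back its guess
# at what comes next. Assumes the pre-1.0 `openai` package and an
# OPENAI_API_KEY in the environment; prompt and parameters are illustrative.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 model
    prompt="Alexa, for the third time, please lower the",
    max_tokens=40,         # length of the guessed continuation
    temperature=0.7,       # higher = more inventive (and more nonsense)
)

print(response.choices[0].text)
```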

Like all good AI, GPT-3 never admits to not knowing an answer; it would rather make things up, weird things sometimes, nonsense, but nicely written nonsense. It might not make sense, but at least it’s syntactically correct. It’s an idea machine, and quite a funny one. Here’s one of its replies when Arram Sabeti prompted it to write an essay on human intelligence:

I propose that intelligence is the ability to do things humans do. [...] The brain is a very bad computer, consciousness is a very bad idea.