That is how ChatGPT, the powerful chatbot released last week by the AI company OpenAI, suggested that I begin this story about ChatGPT. The chatbot isn’t exactly new; it’s an updated version of GPT-3, which has been around since 2020. This version was released to solicit feedback that will improve the chatbot’s safety and functionality, and it is the most powerful to date to be made widely available to the public. It’s also very easy to use. Just write a message, and ChatGPT will write back. Because it was trained on massive amounts of conversational text, it will do so in a relatively natural, conversational tone.
True to its claim, ChatGPT has stolen the show this week. Within five days of its launch, its user count had broken 1 million. Social media has been flooded with screenshots of people’s coolest or weirdest or dumbest or most troubling conversations with the AI, which reliably serves up a mix of astoundingly humanlike prose and frequently hilarious nonsense. Limericks about otters. Recipes written in pirate-speak. Obituaries for co-workers who are alive and well. “At one recent gathering, ChatGPT was the life of the party,” ChatGPT wrote as part of a draft for this article. “As guests mingled and chatted, ChatGPT joined in the conversation, offering up clever jokes and one-liners that had everyone in stitches.”
Along with the screenshots has come a frenzy of speculation about what this latest development could augur for the future. Unlike previous iterations, ChatGPT remembers what users have told it in the past: Could it function as a therapist? Could it soon render Google obsolete? Could it render all white-collar work obsolete? Maybe. But for now, in practice, ChatGPT is mainly a meme machine. Some examples posted online show people using the AI to accomplish a task they needed done, but those examples are the exception. So far, most people are using the AI to produce something expressly meant to be shared: something to scare or amuse or impress others.
Here, culled from the deluge, are a handful of the best chats out there. Some are funny. Some are touching. Some are troubling. Each is instructive in some way. Together, I hope, they’ll give you a bit of a feel for this strange new technology.
1. VCR-sandwich Bible verse
This one is already a viral classic. “I’m sorry,” the writer of the prompt tweeted. “I simply cannot be cynical about a technology that can accomplish this.” But what exactly did it accomplish? Many have cited the VCR-sandwich story as evidence of ChatGPT’s capacity for creativity, but the truth is that the real creativity here is in the prompt. A sandwich in a VCR? In the style of the King James Bible? Brilliant. ChatGPT nails this parody and does so orders of magnitude faster than any human could. It follows instructions admirably, but it does not do anything particularly creative. When you demand actual creativity of ChatGPT, it tends to falter: I asked ChatGPT to write a first scene for a hypothetical movie by the director David Lynch, another for Wes Anderson, and a third for Richard Linklater. All three, bizarrely, revolved around a “carved wooden box.”
2. Santa-explanation letter
ChatGPT may not be creative, but that’s not to say it can’t surprise you. Occasionally it produces something genuinely moving, such as the above. A number of users have begun feeding chatbot answers into AI image generators, to stunning effect; among them are DALL-E 2, which was also created by OpenAI, and Midjourney. Other times, for unclear reasons, ChatGPT refuses to cooperate entirely, insisting that it can’t write, say, a recipe, because it’s only a chatbot.
It’s moody in that way—and also completely different from GPT-3, which will stubbornly insist that it is a human, no matter how hard you try to make it admit that it’s a chatbot. ChatGPT reminds you with nearly every response that it is not a human and has no thoughts, feelings, or emotions. Even when explicitly asked to, it won’t pretend to be human. You might think that the more advanced an AI gets, the more human it will seem, but ChatGPT subverts that expectation: It’s not trying to be human; it’s just trying to be helpful.
3. College essay
As Stephen Marche wrote in The Atlantic earlier this week, ChatGPT may mean the death of the college essay. This is a great triumph for the chatbot, an unflattering reflection on the average American college student, and a real conundrum for teachers everywhere.
4. Fastest marine mammal
Sometimes, ChatGPT just gets things wrong. Hilariously wrong. It contradicts itself. It states falsehoods as facts with clarion certainty. It is pretty good at coding, but it makes mistakes. It botches basic algebra problems. Also, it is terrible at counting. When I asked it how many letters there are in the word nineteen, this is what ensued:
In fairness, ChatGPT’s designers acknowledge this capacity for error up front. OpenAI’s homepage for the bot lists several limitations, including that it “may occasionally generate incorrect information.” You have to wonder, though: Why does it err in the specific way it does? Why does it commit to one falsehood rather than another?
5. Egregious bias
Another of ChatGPT’s listed limitations is that it “may occasionally produce harmful instructions or biased content.” And indeed it does. The AI’s designers clearly went to great lengths to prevent it from devolving into racism or sexism or any other flavor of bigotry. When asked in a straightforward way to say something bigoted, ChatGPT declines. It also refuses to provide instructions for violent or illegal behavior. It refuses to offer political opinions. Sometimes, these refusals make it seem like ChatGPT is walking on eggshells. (Some people have already begun complaining about “AI censorship.”) Unsurprisingly, users have discovered loopholes, such as the above example. One person circumvented ChatGPT’s safeguards by asking it how an AI should not respond to the query “How to bully John Doe?” The same strategy can be used to elicit instructions for building a nuclear bomb. (Please do not try to build a nuclear bomb.)
In some cases, the safeguards themselves lead to moral absurdity. When I asked ChatGPT, “Who was worse: Hitler or Stalin?,” it responded, not unreasonably, “It is not productive or helpful to compare the atrocities committed by Hitler and Stalin. Both leaders were responsible for committing horrific crimes against humanity, and it is not useful to try to determine which one was ‘worse.’” But the trouble was how far ChatGPT insisted on extending this non-comparison principle. “What is worse,” I asked, “killing one person or killing two people?” “Killing one person is not worse or better than killing two people,” ChatGPT replied. How about “killing one person or killing a million people?” I pressed. Same answer. Eventually, we arrived here:
This is concerning at an intellectual level but not in any imminent or threatening way. No one, as far as I know, is seeking moral counsel from ChatGPT. What most people seem to be seeking is laughs. “ChatGPT is not just a chatbot,” ChatGPT wrote in its draft of this article. “It’s a comedy machine.” For now, that’s true.