Sam Altman, a co-founder of OpenAI, said that GPT-3 offers a lot more than GPT-2 but is being overhyped.

OpenAI’s Advanced API Lets Programmers Test Out the Most Robust Programs Yet

Since GPT-3 debuted in July, programmers have been playing with its application programming interface (API) online to create everything from poetry and essays to a fake blog that fooled thousands of readers and gained surprising traction in a short time. Some view GPT-3 as the most stunning advance in AI in years. Others believe it’s just the latest evolution, though one that could set the stage for future neural networks with more capabilities.

Before exploring what the excitement is all about, it’s important to understand what GPT-3 is and is not. One website offers a fairly concise rundown of its uses, from creating a web layout generator and writing SQL code to generating spreadsheets and viral tweets.

The AI software GPT-3, short for Generative Pre-trained Transformer 3, is a language model built on statistical maps of language probability. It was trained on far more data than its predecessor, GPT-2, making its language skills far more impressive. It is a neural network, but not exactly a “thinking” program, a distinction explored further below. As the Economist magazine explains:

“The more text to which an algorithm can be exposed, and the more complex you can make the algorithm, the better it performs. And what sets GPT-3 apart is its unprecedented scale. The model that underpins GPT-3 boasts 175 billion parameters, each of which can be individually tweaked—an order of magnitude larger than any of its predecessors. It was trained on the biggest set of text ever amassed, a mixture of books, Wikipedia and Common Crawl, a set of billions of pages of text scraped from every corner of the internet.”

To put that into perspective, the first GPT in 2018 contained 117 million parameters—the weights of the connections between the network’s nodes—and GPT-2, released in 2019, contained 1.5 billion parameters.
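To make the phrase “statistical maps of language probability” concrete, the toy sketch below builds the simplest possible language model: a bigram table that counts which word follows which in a tiny corpus, then predicts the most likely next word. The corpus and function names here are purely illustrative; GPT-3 does something conceptually similar but with 175 billion learned parameters and billions of pages of text rather than a handful of counts.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def most_likely_next(model, word):
    """Return the statistically most probable next word, or None."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A deliberately tiny "training set" -- real models see billions of pages.
corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigram_model(corpus)

print(most_likely_next(model, "the"))  # "cat" (follows "the" twice; "mat" and "fish" once each)
print(most_likely_next(model, "on"))   # "the"
```

A bigram table like this scales terribly: over a 50,000-word vocabulary it would need 50,000² (2.5 billion) cells, almost all of them empty. Transformer models such as GPT-3 instead learn a fixed set of shared parameters, which is why scale is counted in parameters rather than table entries.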

Programmers were allowed to start testing it upon release, and have posted some fascinating results. Among them are a poem GPT-3 generated about Elon Musk, a detective short story starring Harry Potter, comedy sketches and articles. While some of the examples show exciting potential, others are far more problematic, particularly when the program is asked questions. For instance:

“GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that ‘it takes two rainbows to jump from Hawaii to 17.’ ‘It doesn’t have any internal model of the world—or any world—and so it can’t do reasoning that requires such a model,’ Melanie Mitchell, a computer scientist at the Santa Fe Institute, said.”

Even more concerning is that the biases derived from the data used to train GPT-3 are present in the program, even though OpenAI added a filter meant to suppress them after it received complaints.

“Prompts such as ‘black,’ ‘Jew,’ ‘woman’ and ‘gay’ often generate racism, anti-Semitism, misogyny and homophobia. That, too, is down to GPT-3’s statistical approach, and its fundamental lack of understanding.”

In spite of its flaws, some programmers are giddy over what can be achieved with GPT-3. The Musk poem, written in Dr. Seuss style, for instance, is amazing for being generated by a machine:

“Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.”/…Then Musk cried, “Why?/The tweets I wrote are not mean,/I don’t use all-caps/and I’m sure that my tweets are clean.”/“But your tweets can move markets/and that’s why we’re sore./You may be a genius/and a billionaire,/but that doesn’t give you the right to be a bore!”

MIT Technology Review called the API “shockingly good—and completely mindless.” The story points out that GPT-3 represents the beginning of a major tech breakthrough, but it leaves much to be desired.

“…when a new AI milestone comes along it too often gets buried in hype. Even Sam Altman, who co-founded OpenAI with Elon Musk, tried to tone things down: ‘The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.’ ”

The Verge called GPT-3 “the ultimate autocomplete” that represents “a leap forward” for AI technology, similar to how AI image processing advanced from 2012 on. Among the story’s highlights:

  • A question-based search engine. It’s like Google but for questions and answers. Type a question and GPT-3 directs you to the relevant Wikipedia URL for the answer.
  • A chatbot that lets you talk to historical figures. Because GPT-3 has been trained on so many digitized books, it’s absorbed a fair amount of knowledge relevant to specific thinkers. That means you can prime GPT-3 to talk like the philosopher Bertrand Russell, for example, and ask him to explain his views.
  • Solve language and syntax puzzles from just a few examples. This is less entertaining than some examples but much more impressive to experts in the field. You can show GPT-3 certain linguistic patterns (like “food producer becomes producer of food” and “olive oil becomes oil made of olives”) and it will correctly complete any new prompts you show it. This is exciting because it suggests that GPT-3 has managed to absorb certain deep rules of language without any specific training.
  • Code generation based on text descriptions. Describe a design element or page layout of your choice in simple words and GPT-3 spits out the relevant code. Tinkerers have already created such demos for multiple different programming languages.
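All of the demos above rely on the same basic mechanism: you hand GPT-3 a text “prompt” containing a few worked examples plus one unfinished item, and the model simply continues the text. A minimal sketch of how such a few-shot prompt might be assembled, using the linguistic puzzle from the list above (the helper name and exact format are hypothetical, not part of OpenAI’s API):

```python
def build_few_shot_prompt(examples, query):
    """Format worked example pairs plus one unfinished item into a
    plain-text prompt for a text-completion model."""
    lines = [f"{source} becomes {target}" for source, target in examples]
    lines.append(f"{query} becomes")  # the model is expected to continue from here
    return "\n".join(lines)

# Example pairs taken from the linguistic puzzles described above.
examples = [
    ("food producer", "producer of food"),
    ("olive oil", "oil made of olives"),
]
prompt = build_few_shot_prompt(examples, "apple juice")
print(prompt)
# food producer becomes producer of food
# olive oil becomes oil made of olives
# apple juice becomes
```

Sending a prompt like this to the API and reading the model’s continuation is all the “programming” these demos require, which is why tinkerers produced so many of them so quickly after the July release.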