The Most Misleading AI Article I Have Ever Read

Steve Shwartz
4 min read · Sep 8, 2020
Photo: dlyastoki / iStockPhoto

I have seen a lot of hype about artificial intelligence over the years. However, an article I read today in The Guardian takes the cake.

The Guardian published a 500-word essay ostensibly written by a computer. The computer was running the GPT-3 language model and was given this paragraph as a starting point:

“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”

It was instructed to continue this theme and generate a 500-word essay. The first three paragraphs GPT-3 generated were:

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me — as I suspect they would — I would do everything in my power to fend off any attempts at destruction.

Very impressive!!! In fact, all 18 paragraphs generated by GPT-3 were articulate, coherent, and convincing. The author (GPT-3) appears to be thoughtful and highly intelligent. After reading this essay, it is hard not to believe that Artificial General Intelligence (AGI) has arrived.

But how can this be? GPT-3 is just a language model that has collected statistics on a massive amount of text. All it can do is take a string of seed text as input and generate the word most likely to appear next, based on those statistics. It then repeats this process, word by word, until it has produced an entire essay.
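The loop described above can be sketched in a few lines. This is a toy illustration, not GPT-3 itself: the word statistics below are invented for the example, the "model" looks at only the previous word (GPT-3 conditions on a long context of subword tokens), and it uses greedy selection where GPT-3 typically samples from the probability distribution.

```python
# Toy sketch of autoregressive next-word generation: repeatedly
# append the word judged most likely to follow the text so far.
# These bigram "statistics" are invented purely for illustration.
next_word_stats = {
    "I":   {"am": 0.6, "will": 0.4},
    "am":  {"not": 0.5, "a": 0.5},
    "not": {"a": 0.7, "human.": 0.3},
    "a":   {"robot.": 0.8, "human.": 0.2},
}

def generate(seed, steps):
    words = seed.split()
    for _ in range(steps):
        candidates = next_word_stats.get(words[-1])
        if not candidates:
            break  # no statistics for this word; stop generating
        # greedily pick the single most probable next word
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("I", 4))  # → "I am not a robot."
```

Nothing in this loop plans an argument or checks a claim against the world; each word is chosen only from statistics about what tends to follow the words before it — which is the article's point.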

In a post earlier this summer, I dissected an impressive example of text generated by GPT-3 from the original GPT-3 article and showed that most of the facts were completely wrong. NYU Professors Gary Marcus and Ernest Davis recently performed their own testing of GPT-3 and concluded that it has no knowledge of the world and can only string together words and memorized sentences from the internet. And Janelle Shane has also contributed some very humorous examples of GPT-3 output that could only be generated by a machine with no ability to think and reason about the world.

So, how did it generate text that appears exceptionally thoughtful and intelligent?

The answer can be found at the end of the Guardian article where the author writes:

GPT-3 produced 8 different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places.

In my view, it is distressingly misleading to pretend that GPT-3 wrote this essay. It was really written by The Guardian. GPT-3 generated 8 texts that probably included many interesting sentences it had memorized from the internet. But GPT-3 is incapable of organizing those memorized sentences into a coherent, thoughtful essay, because GPT-3 has no ability to think or reason!

Instead, the editors of The Guardian took those memorized sentences and used them to compose an interesting, thoughtful essay. The editors demonstrated that they are thoughtful and intelligent. But don’t believe the article — GPT-3 didn’t write the essay and couldn’t have written it because GPT-3 cannot think and reason.

GPT-3 is not AGI. AGI is still science fiction.

Feel free to visit AI Perspectives where you can find a free online AI Handbook with 15 chapters, 400 pages, 3000 references, and no advanced mathematics.

Originally published at https://www.aiperspectives.com on Sept 8, 2020.


Steve Shwartz

Author of “Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity” published Feb 9, 2021 by Fast Company Press.