GPT-3 and the Rise of Omnipresent AI-powered Nonsense

Last week the world went nuts over GPT-3, OpenAI's newest AI language model.

But for anyone who played with its ancestor GPT-2 last year, this should hardly be news. The capabilities of the newest iteration were announced ahead of time (OpenAI withheld their most potent models last year precisely to give the public an opportunity for debate).

So why the hype?

All GPT models (including GPT-3) are Chinese rooms. They predict the next word, given all of the previous words within some text, reproducing fluent language without any real understanding.
Of course, real understanding is hardly the point here. After all, asking whether a computer can think is like asking whether a submarine can swim. Creating a message without having something to say, though, is exactly the point.
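The mechanism is easy to illustrate with a toy model. The sketch below is emphatically not GPT (which uses a transformer network over subword tokens), but a minimal bigram predictor; it makes the same point, that next-word prediction needs only statistics over preceding words, and no understanding at all:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent successor of `word` (greedy decoding)."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Hypothetical toy corpus for illustration only.
corpus = "the signal is a virus the signal is an attack"
model = train_bigrams(corpus)
print(predict_next(model, "signal"))  # prints "is"
```

GPT-3 does the same thing at a vastly larger scale, with context windows of thousands of tokens instead of one word, which is why its output is fluent rather than repetitive; but the objective, predict the next token, is unchanged.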

These models talk eagerly but say nothing of consequence.

For now, the first applications sit at the curiosity end of the spectrum: deepfake text and UI automation.

GPT-3 and Weaponized Information Payloads

To better explain the concept of weaponized AI content creation, I'd like to take Peter Watts' thought-provoking sci-fi novel Blindsight as a point of reference.

The plot revolves around a group of transhumans sent out to make first contact. As it turns out, the alien in question is more like a Von Neumann universal constructor crossed with a Chinese room than anything resembling life and consciousness as we know it.

And (of course) it's hostile. After accidentally stumbling across RF broadcasts from Earth, it concluded that they were in fact an attack – what else would an alien call intelligently structured human nonsense that consumes your attention (a resource) for zero payoff? You could also just call it clickbait.

Quoting from the book:

There are no meaningful translations for these terms. They are needlessly recursive. They contain no usable intelligence, yet they are structured intelligently; there is no chance they could have arisen by chance.

The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness.

The signal is a virus. Viruses do not arise from kin, symbionts, or other allies.

The signal is an attack.

Unfortunately, we're all surrounded by nonsense that poses as useful messages. Nothing is random: everything we see on our feeds is optimized for engagement, engineered to either influence our thinking or steal our time.

And GPT-3 will be used to carry out information wars at scale with minimal effort, whether as deepfake text, automatically generated content in response to trending topics, fake news, or more.

We are at a turning point, and should be prepared to see a constant undercurrent of AI-generated deepfake nonsense running through our society's commentary over the next few years.

And it won't stop there. Last year I wrote a piece on GPT-2 and AI marketing, specifically exploring how AI-generated content can work together with SEO to create autonomous PBNs (Private Blog Networks): website clusters that spin up content and links on demand to manipulate search engines.

Final Thoughts

AI is not a force for good or evil; technology can be used either way. What the above analysis really means is that we need to be mindful of how technology is woven into our reality.

The 21st century informed individual should know that seeing no longer means believing. That someone is always watching. And that money moves in blocks.