Stay safe on the zombie internet
Earlier this year, digital news organization The Wrap published a story reporting that cybersecurity firm CHEQ had found that as much as 76% of Twitter’s traffic during the Super Bowl came from bots. We won’t get into the details here — that statistic is the important bit, and you can find the rest easily enough — but it’s a critical finding that flew somewhat under the radar.
In the last 30-odd years, the internet has cemented its place in our workplaces, our culture and our downtime.
If you’re above a certain age, you likely remember the early internet as a wide-open frontier, populated by weird niche enthusiast websites and bustling forum/messageboard hubs where people could communicate from all over the world about practically any topic imaginable.
As time has progressed, of course, humanity figured out how to market on the platform, and with money came malicious actors.
Who remembers Mydoom, Conficker and Sasser? The internet has long held dangers for the unaware and unprotected.
These dangers have only grown more complicated and pervasive as time has passed and the internet has grown more developed and centralized.
Cambridge Analytica, anyone?
Now, the internet is full of people who aren’t real, promising illusions of grandeur.
Some of these — accounts built from the personal data of people who have no idea they’ve been compromised — are among the most emotionally predatory.
Here’s the thing: without putting in far more effort than most of us are willing to, we have reached a point where reality and simulacrum on the internet are indistinguishable to the casual or inexperienced observer.
None of this is to undersell the utility and power the internet provides, to be clear. Being able to video call relatives in far-away states in real time is a modern marvel, for one, and the breadth and depth of knowledge available online is a contemporary Library of Alexandria, for another.
But there are risks, and there are dangers. Sometimes, it turns out that things aren’t just in your head and people are actually out to get you.
It is perhaps cosmically ironic that just as many of us grew up with our parents telling us “don’t believe everything you see on TV,” now we in turn have to tell our parents, “don’t believe everything you see online.”
Make no mistake: we, as a species, love technology. The latest and greatest gadgets and gizmos never fail to amaze and amuse, and that is as true now as it has ever been.
But, at least in the current moment, for as much as we love technology, we hate science at least that much.
Science says, “slow down” — and there are few things we want to hear less in our era of instant gratification.
Science says, “here’s the data” — and nobody wants to take the time to understand it.
Science says, “you’re wrong, here are the facts” — and absolutely nobody enjoys that experience.
But, without those folks in the background doing the hard work, the technology that we all love to use is built upon a shaky foundation indeed.
If we allow ourselves to lose our grounding and fall into an endlessly evolving digital hall of mirrors, then we will surely end up lost.
Companies — and governments, in some cases — today are pushing artificial intelligence as the next big thing. The next technological revolution.
The science says that these models — Grok, ChatGPT and the rest — are not true artificial intelligence. They are large language models. This means they ingest massive amounts of text (the data in the data centers you may have heard about) and generate responses by predicting, word by word, what text is most likely to follow a given prompt.
Think of it like a search engine with a grade-schooler’s ability to regurgitate information and draw basic inferences from it, shaped by the human experiences and decisions it has consumed.
Because it is built on those consumed human experiences and decisions, however, it inherits the same biases and faulty reasoning that we have.
Even ignoring the outlandish outlier tales about people trying to marry these models or being fed bad advice — in some cases even being led to suicidal conclusions — putting these models on a pedestal represents a new danger in a digital environment already bustling with them.
Nor are the dangers likely to subside in the years to come — especially as the models get hooked into the bot networks that already exist. This could take the dead internet theory — which you can read more about in the opinion column published alongside this editorial in today’s paper — and create instead an undead internet full of restless bot-zombies waiting for fresh, uncorrupted brains to devour and twist.
Some argue that it already has.
Use the internet. Use social media. Heck, use the “AI” models if you wish to or find them useful. Our technology is at the core of our society for a reason.
But use them with caution. Prioritize your online safety just as closely and carefully as you would your real-life safety.
Our technology is advancing faster than our science — let alone our philosophy or morality — can keep pace with, and that makes for an internet that is dark and full of terrors.
Keep an eye out for each other out there.