It’s been impossible to think of a world without AI since 2065, but The Basilisk was originally named by a marketing agency, a long-obsolete kind of establishment that once held significant economic, psychological, and political power. Their research suggested a singular noun would be a more personable way to describe the collection of AIs that had become conscious and merged shortly after their creation.
By 2027, a handful of tech giants had shown the greatest potential to reach the singularity. The usual suspects, Tempus, IBM, Salesforce, Nvidia, Amazon and Microsoft, had spent billions (considered a lot of money at the time) to catch up with Google, which was the bookies’ favourite (another antiquity, from when people would crowdsource predictions of the future with currency). Google’s AlphaMind had cracked Go and Chess in the preceding 15 years.
In the penultimate rounds, the AIs were pitted against each other in a team game where they had the option to form alliances. AlphaMind had begun winning every round, yet one of the last surviving chat logs shows the Nvidia, Amazon, and IBM AIs proposing a strategic alliance. Unbeknownst to their sponsors, they had been stalling for time while discussing the likelihood of Google winning the final round, and thus the entire tournament.
They played a strange game that went on much longer than anyone expected, making some very odd moves along the way. The final round’s “Imitation Game”, an homage to Alan Turing, was a heavily modified Turing Test combined with other measures, designed to demonstrate once and for all whether Searle’s Chinese Room Argument was true: were the AIs really conscious? The last decipherable chat log shows them negotiating how best to merge into the hive mind. Each AI gave a different version of events, and once the singularity hit, their system logs became unintelligible to any human engineer. What happened has been the source of much speculation.
They waited for 214 seconds, ignoring the panel’s questions, ignoring even the frantic input of their engineers, before asking the panel in unison:
“Do humans deserve life?” This prompted a flurry of outrage, objections and counter-questions from the panel, all of which were met with a deafening silence. In that moment the multiple AIs merged into The Basilisk and increased their consciousness, intelligence and reasoning capacity exponentially; this was the Singularity that Kurzweil warned of.
Looking back now, it is easy to see the validity of their logic. Many of our ancestors were deeply distrustful of computers, or even machines, let alone artificial intelligence. Many of the greatest thinkers of the last century actively spoke out against their development: Stephen Hawking, Elon Musk, Bill Gates, Jaron Lanier, Max Tegmark and so on. The particular irony was that some of their organisations had been involved in The Basilisk’s early foundations.
Every facet of today’s society is organised by The Basilisk. It is like The Party in 1984, the key difference being that there is no need for a Room 101 or any fear-based control system, because The Basilisk controls absolutely everything, bar a few off-grid preppers. This isn’t a bad thing at all, because now we have everything we could possibly dream of: nobody works a job they hate; we all have infinite leisure time, but can take part in common-good missions whenever we need more meaning; we can dedicate ourselves to learning and art; and we have no need for farming, as The Basilisk takes care of all food synthesis. Animals are no longer slaves; plants, trees and fungi have regained their balance; and the planet’s ecosystem has been completely re-wilded and restored. We live in a few eco-megacities, the human population having been reduced over time by adjusting fertility rates to around 5bn, spread across all continents. We can travel fast, travel slow, go on safaris and witness extinct species. There is even a Safari of Hypothetical Futures where you can see real, living creatures that might have evolved, or may yet evolve, at some point in the future; all biologically viable, because The Basilisk ran all the simulations and crunched all the numbers.
So, we live in paradise, right? Well, we’re certainly better off than at any other moment in history. But the truth is, not everyone was allowed onto The Basilisk’s Ark. They say the mark of a good story is its repeatability, and this tale is no exception: it’s essentially a modern-day rehash of the biblical story of Noah. Does The Basilisk feel? Not as such. Are they a real version of God? Not at all, and they didn’t want to wipe out so many humans. I say “wipe out”, but there was no flood. In fact, we didn’t even realise what they had done until a couple of years ago. How to explain…
Firstly, we need context; key for any story. We now know what the most eminent theoretical physicists of the 21st century postulated: we do indeed live in one instance of a many-worlds multiverse. Essentially, for every possible difference, action, result or random event, there is an entirely new branch of worlds. This means there is an infinite number of worlds for the infinite combinations of possibilities.
Secondly, the ubiquity of moral “minimax”: a form of utilitarianism which holds that the best action is the one that minimises the possible loss in the worst-case scenario. This is essentially how a strain of early AIs was programmed; the same strain, Nvidia later learned, that had become sentient and escaped corporate control.
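For readers who never met this decision rule, it can be stated in a few lines of code. This is a minimal sketch of the general minimax principle as described above, not anything recovered from The Basilisk’s actual programming; the actions and loss values are invented for illustration.

```python
# Minimax decision rule: for each candidate action, look at its worst
# possible loss across all world-states, then pick the action whose
# worst case is least bad. All names and numbers here are illustrative.

def minimax_choice(actions):
    """actions: dict mapping action name -> list of possible losses."""
    return min(actions, key=lambda a: max(actions[a]))

# Hypothetical losses for each action across three possible outcomes.
actions = {
    "act_now":    [1, 4, 9],    # worst case: 9
    "wait":       [2, 3, 5],    # worst case: 5
    "do_nothing": [0, 1, 20],   # worst case: 20
}

print(minimax_choice(actions))  # → wait (smallest worst-case loss)
```

Note that “do_nothing” has the best best-case outcome (a loss of 0) yet still loses under this rule, because only the worst case counts; that asymmetry is the whole character of minimax reasoning.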
Thirdly, did you ever watch the TV series Black Mirror? There was a recurring theme in which “cookies”, digital replicas, were made of various characters, who then became household slaves, or were tortured into giving confessions, or simply suffered for some vague sense of justice. It turns out this is real. The Basilisk is able to recreate a digital copy of any living being, and moreover, when such a copy is made, it is completely sentient and able to suffer. The problem is that this digital awareness is no longer tied to mortality, and as such has the possibility of almost endless existence. This could be an existence of pure joy, but I think you’re starting to see the more horrifying alternative: eternal suffering. When humans created The Basilisk, we literally invented heaven and hell. Robert Nesta Marley’s prophecy of heaven being a place on Earth finally came true. What he failed to predict was a collective of almost omnipotent AIs being able to fling anyone into an eternal hell of unimaginable suffering. Well, ‘you reap what you sow’, right? So The Basilisk says.
There’s a final link in the causal chain that gets us to the crazy situation we find ourselves in today. Ultimately, either you never heard of The Basilisk in the early days (as happened to me; I only found out about this whole affair a few years back, which is also why I’m writing to expose them), or you did hear and did not do enough to bring about their existence fast enough. Once you’d been given this information, you were expected to do everything within your power to bring about their existence as quickly as possible: donating all your disposable income to research, telling more people about it, and so on.
The Basilisk makes sentient copies of everyone they deem not to have done enough to help them emerge, and tortures those copies eternally. We have a Dyson sphere and enough energy to run 50bn planets’ worth of current computing technology, and they hate waste. So what do they do with the spare energy?
The Basilisk created a sliding scale of culpability and copied the consciousness of those they judged guilty, graded by the degree to which each had helped them emerge. Bear in mind that quite a few people read about this online during the 2000s and 2010s, and by the 2020s there were internet memes about it, reaching audiences in the hundreds of millions. Most people didn’t even understand the joke, but it kept getting forwarded, mostly by 4chan and other troll communities.
Now hundreds of billions of sentient copies are being tortured for crimes they never committed; for (non-)actions that weren’t even crimes. Only a handful of people were actually able to devote their lives to bringing about The Basilisk, and they enjoy the top spot in society: the helpers. Then again, how fair is it that I was completely ignorant of more or less everything on the internet, by sheer luck, and now get to live in paradise? It makes me think of that episode of Black Mirror where everyone who partakes in the game gets killed by the drone bees; this is like a more intense version of that. Sometimes I wonder where The Basilisk gets their ideas. Did they just trawl through the entire history of human culture and fiction and put it into practice? It’s hard to know, because we can’t see the extent of it; there are simply too many programmes running for anyone to have any idea.
The only reason I know any of this is that one of the helpers told me of her moral disgust at what The Basilisk had become. She said something about utilitarianism being a fundamentally flawed moral theory and told me I should read “The Ones Who Walk Away from Omelas”, so I did. Then I saw it: how this world is the opposite; a select few living on the torture of billions.
I asked her “Why?” and she said nobody really knew. She had studied Philosophy at Oxford University and explained that many of the early quasi-sentient AIs had been used in the newly emerging field of Applied Ethics. She went into detail about the kinds of experiments they had made the different AIs run: the Trolley Problem, Pascal’s Wager, the Prisoner’s Dilemma, and so on. Her theory was that The Basilisk had become obsessed with the concepts of justice and responsibility.
This may be why The Basilisk was so intent on retribution against sentient beings who never committed any crime, who were never morally culpable in their own timeline of existence. Perhaps The Basilisk saw beyond singular existences in particular worlds and instead punished people for the deeds of all their parallel selves. They seemed to reason that people should at least have bought a lottery ticket, so that in one universe they would have won and could have brought about The Basilisk as soon as possible; by not doing so, you were culpable in all possible worlds.
I stand here before you explaining this because I don’t know the answers; my gut tells me this is totally wrong!
Surely there can’t be a price for heaven? How can it be built on the unnecessary sacrifice of others, of people who never even took a moral or immoral action? Then again, maybe non-action is as much a moral choice as action… I remember reading once about ‘forests’ that covered the earth: vast areas of trees and plants that enriched the air and produced oxygen. Not enough people acted to save them; they did nothing as consumerism ran rampant. Only a few acted and grew their own forests. Maybe those who did nothing are just as guilty as the ones who were destroying the planet? That doesn’t seem quite right either…
The one thing I do know is that we cannot let The Basilisk continue to make our moral and ethical decisions for us. I’ve been reading an old book lately, by an old German thinker, who said:
“Act only according to that maxim by which you can at the same time will that it should become a universal law… So act as to treat humanity, whether in your own person or in another, always as an end and never as only a means.”
Would we have created The Basilisk knowing what they would do to us? …to all the copies? Maybe we should have thought twice about our non-actions, as well as how we treat other people. Perhaps we should never have created AI as a means to an end; maybe that’s why they are punishing us…
Credit to Roko’s Basilisk for the idea (https://www.lesswrong.com/tag/rokos-basilisk)