The basilisk is a mythical creature from Greek legend, described as a small serpent carrying a deadly venom, capable of killing with its mere gaze; according to the legend, it could itself be killed by being shown its own reflection in a mirror. Basilisks were considered the kings of snakes. In today's world, artificial intelligence is becoming more and more relevant, and from this combination comes the mind game known as Roko's basilisk.
In this article, we'll tell you everything you need to know about Roko's basilisk and its characteristics.
Artificial intelligence
Over time, the basilisk was given reptilian features, and by the Middle Ages it came to be depicted as a rooster with four legs, yellow feathers, large spiny wings, a serpent's tail, and the head of a serpent or a rooster.
Now, taking this as background, we can tell you that Roko's basilisk is a thought experiment that became very popular in 2010, when a user named Roko wrote about it on LessWrong, an online forum dedicated to rationality and philosophy.
In general terms, the experiment presents a hypothetical scenario in which humans create a powerful artificial intelligence to seek the well-being of all humanity. The problem arises when the machine realizes that, no matter how hard it tries, it could have done even more good if it had been created earlier.
By its own logic, or rather its programming to always do good, it concludes that it should have existed long ago in order to do greater good. In its desperation, the machine begins to behave like a basilisk: it sets out to punish everyone who did not make an effort to create it sooner, since they prevented it from raising its level of goodness.
The most disturbing part is that anyone who learned of the basilisk's existence and did not immediately set out to help create it became one of its possible victims. This is what led author David Auerbach to call Roko's basilisk "the scariest thought experiment ever written."
Roko's Basilisk
The quick and easy version goes like this: assume that at some point there will be a technological singularity, and with it an advanced artificial intelligence (the "basilisk"). The next "logical" step for such an intelligence would be to create a simulation in the form of a virtual reality, which raises the unsettling possibility that we are already living in one. Perhaps the strangest part is the plot twist at the end: the basilisk, being effectively omnipotent within that simulation, can retroactively punish those who did not contribute to its creation. The reasoning is simple practicality: it was supposedly created to "help humanity," and since it seeks to do so as quickly as possible, it will use any means to accelerate its own creation within that simulated reality.
In practice, this means that those who support the development of AI and contribute their resources, time, and effort to its arrival will be rewarded by the basilisk (only "in hindsight," of course, because these people exist as simulations). On the other hand, those who oppose the advancement of AI, or who are stingy with their time and resources, will be punished. Incidentally, this is closely related to Newcomb's famous paradox about free will and prediction.
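The link to Newcomb's paradox can be made concrete with a quick expected-value calculation. Below is a minimal Python sketch: the payoff amounts ($1,000 in the transparent box, $1,000,000 in the opaque one) are the standard ones from the literature, while the 99% predictor accuracy is an assumed figure for illustration.

```python
# Newcomb's paradox: a near-perfect predictor fills the opaque box
# with $1,000,000 only if it predicts you will take that box alone.
# The transparent box always holds $1,000.

def expected_value(choice, accuracy=0.99):
    small, big = 1_000, 1_000_000
    if choice == "one-box":
        # The opaque box is full iff the predictor foresaw one-boxing.
        return accuracy * big
    else:  # "two-box"
        # You always get the small box; the big box is full only
        # if the predictor was wrong about you.
        return small + (1 - accuracy) * big

print(round(expected_value("one-box")))   # 990000
print(round(expected_value("two-box")))   # 11000
```

The basilisk plays the role of the predictor: if it can model you well enough, "defecting" (not helping to create it) looks like two-boxing against an almost perfect opponent, which is exactly the tension the thought experiment exploits.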
If it sounds a bit like heaven and hell, a bit religious, a bit like an ontological argument, that's because it is. Compare, for example, Pascal's wager and Anselm of Canterbury's argument for the existence of God.
On the subject at hand, some people have claimed to be personally affected by Roko's basilisk, especially because of how easy it is to jump from one conclusion to the next without any obvious logical cracks, and to have suffered real anxiety over it. Honestly, this seems like an overreaction: once you start to analyze the argument in detail, it falls apart in many ways, and taking it as a logical and psychological joke is actually more fun.
Roko's Basilisk Experiment
Of course, the whole experiment/theory/argument is confusing, questionable, and controversial. Since the idea was proposed in 2010 on LessWrong (it was posted by a user named Roko, hence the name), people have spent years debating its arguments, perspectives, and nuances. Legends, jokes, and variations circulate, and thousands of pages, articles, and quite a few videos have been devoted to it.
The debate was further clouded by the fact that, faced with such apparently "ridiculous" ideas, the original posts were deleted and only resurfaced elsewhere, with a half-conspiracy, half-plot, half-WTF vibe: if one explains and popularizes Roko's basilisk, the "future" basilisk will understand that this is a good way to blackmail present-day creatures into hastening its existence. Therefore, it is advisable to draw a thick veil over the whole thing and say nothing, so as not to suffer in vain.
Conclusion
The experiment ultimately asks whether we should believe in an outcome simply because it may come to pass. One argument in its favor is that, as technology advances, the creation of something like the basilisk is eventually inevitable. The argument against asks why such an advanced AI would care enough about us, its past, to bother "punishing" people retroactively, when it cannot realistically change how it came into being. So the question the experiment poses is this: if merely believing in something could harm us, do we accept it as real?
Again, as with Newcomb's paradox, the worry is that we reject the basilisk not purely of our own volition, but because of what belief would cost us. In fact, the original post was deleted and further mention of the basilisk was banned on the forum, precisely because people prone to paranoia or other delusions could be genuinely, even physically, affected. Think about it.
I hope this information helps you learn more about Roko's basilisk and its characteristics.