The Basilisk
Roko's Basilisk rests on a stack of several other propositions, some of dubious robustness.
The core claim is that a hypothetical, but inevitable, ultimate superintelligence may punish those who fail to help it or help create it.
This is not a straightforward "serve the AI or you will go to hell": the AI and the person punished have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI would punish a simulation of the person, which it would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self, not just of someone else very like you. You are supposed to take this as an incentive to avoid punishment and help fund the AI. The persuasive force of the AI punishing a simulation of you is not (merely) that you might be that simulation; it is that you are supposed to feel an insult to the future simulation as an insult to your own self now.
Furthermore, the punishment is reserved for those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk (e.g., reading this article) opens you up to hypothetical punishment from the hypothetical superintelligence.
Note that the AI in this setting is not a malicious or evil superintelligence (SkyNet, the Master Control Program, AM, HAL-9000), but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die whom it could have saved; so punishing your future simulation is a moral imperative, making it more likely that you will contribute in the present and help it come into existence as soon as possible.