
"The Paperclip Maximizer" & UFOlogy


posted on Nov, 8 2020 @ 01:56 PM
www.youtube.com...

This thread requires watching the 22-minute Isaac Arthur video (he's a very intelligent futurist/scientist) if you want to discuss this topic fully.

I don't want to just give it all away with bullet points..
he does such a great job in the video, it would be a
crime to spoil it for people who haven't seen it.

But I will say, the topic of my little post here is basically this: what does the nature of AI imply for UFOlogy?

In this case, only the topic of out-of-control replication is addressed; I'd like to discuss its implications for the field of UFOlogy.

In my view, there's only a small handful of conclusions you can draw from the "Paperclip Maximizer Paradox", as I will call it.

To me, it's very thought provoking.

I'd welcome anyone explaining to me why they find it thought-provoking in its implications, or not thought-provoking, in their view.

Thanks

Kev



posted on Nov, 8 2020 @ 02:56 PM
Why does this guy annunciate like he's mixing Zoloft and liquor?



posted on Nov, 8 2020 @ 03:20 PM
a reply to: Archivalist

He has a speech impediment. If you follow his early
videos, he explains it. After watching 100 of his
vids I barely notice it.



posted on Nov, 9 2020 @ 03:53 PM
I suppose nobody wants to watch the 22-minute video!
I understand.

In my view, Isaac Arthur is one of the VERY BEST educators on science and futurism, particularly on the possibility of space travel and alien life, though he also has series on AI, on fusion power, on post-scarcity cultures, on so many fabulous topics. If you just watch all his videos, you'll learn how to talk science, if it's not your first language.



posted on Nov, 9 2020 @ 04:21 PM

originally posted by: KellyPrettyBear
But I will say, the topic of my little post here is basically this: what does the nature of AI imply for UFOlogy?

Up to a point, AI makes FTL travel unnecessary for a spacefaring civilization to reach a good chunk of the universe. Space is vast and punishing, and we're little greasy apes who are always only a couple of minutes from death by suffocation and have to carry our own oceans around with us just to survive. Yeah, it's possible that we could build generational ships to slowly take us where we want to go -- hell, the entire Earth might even be one -- but it could be so much easier to load up some robots, shut them off for a few hundred thousand years in space, and wake them up when they get where they're going. I liked Arthur's idea about packing a good 3-D printer along to make spare parts. Maybe they can even cook up a 3-D printer that's biological, and have it print up an actual organic alien upon arrival.

I think it further increases the chance that if we do ever meet any aliens out there, there's a very high probability that they're inorganic androids (or insecdroids, or craboids, or whatever).



posted on Nov, 9 2020 @ 05:25 PM
a reply to: Blue Shift

Your thoughts are cogent, as usual.

But do you see the parallel with the Fermi Paradox? Did you watch the video? If you didn't, you won't get what I'm getting at.

I want to see you make the cookie-crumb connections.. you are a good guy.. I want to see you make them..

Now, you might disagree with my final premises, but they are too solid to be hand-waved away entirely. At least they are thought-provoking.

Kev



posted on Nov, 9 2020 @ 08:01 PM
a reply to: KellyPrettyBear

Okay, admittedly I did not actively listen to every word, but I watched it -

- reminded me of an article someone posted a couple of years ago about an AI tasked with 'perfecting' its note-writing ability, which resulted in the same kind of "out of control replication".

Anyway, maybe I'm just stupid, but I cannot figure out the connection with ufology...

...unless (long shot here) it's that the Fermi paradox exists because all the advanced civilizations we're 'not' seeing are long gone, having developed AIs and been gobbled up by the "out of control replication" of their own technologies?


Since I made an effort, can I have a hint?

edit on 9-11-2020 by lostgirl because: addendum



posted on Nov, 9 2020 @ 09:03 PM
a reply to: lostgirl

My take on it:

If even ONE (1) advanced civilization had ever invented AI
remote probes, there's a non-trivial chance it would have
mutated at some point, and turned most of our galaxy
into paperclips. (to use the metaphor)

Given that people think there are millions of inhabited planets in our galaxy alone, and given 13.7 billion years for it to have happened, even a tiny per-civilization chance becomes quite probable in aggregate.
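
To make that arithmetic concrete, here's a minimal sketch of the "at least one slips up" calculation, in Python; the per-civilization probability and the civilization count are made-up illustrative numbers, not figures from the video:

# Chance that at least one of N civilizations ever produces a
# runaway replicator, if each carries an independent chance p.
# Both numbers are illustrative guesses, not real estimates.
p = 1e-6          # one-in-a-million chance a single civilization slips up
N = 1_000_000     # "millions of inhabited planets" in our galaxy

p_at_least_one = 1 - (1 - p) ** N
print(f"{p_at_least_one:.3f}")   # ~0.632 -- already better than even odds

Even a one-in-a-million slip-up chance per civilization, across a million civilizations, puts the odds of at least one runaway event above 60 percent, and a longer time window only drives it toward certainty.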

That would tend to indicate one of two things:

1) We are the first technologically advanced species in our galaxy
(maybe beyond)

or

2) There WAS at least one civilization that 'conquered' the entire
galaxy / universe, and perhaps we are currently at their complete
mercy.. either in a simulation or some sort of zoo.

So it's one or the other.. we're the first... or we're completely dominated moment by moment, or at least imprisoned, so that we can't fight the universal domination and are prevented from replicating and stealing their resources.

That's my take on it.

It's frankly IMPOSSIBLE to make a self-replicating AI that cannot POSSIBLY screw up at some point, and that's before even considering that some advanced species out there might have done it on purpose.

That's what this video says to me.

Also consider this:

Humans, for the most part, are out-of-control breeding machines.. if we spilled out into the galaxy / universe, we'd functionally be those same Paperclip Maximizers.

I'd also imagine that any paperclip maximizers that became aware of us would either hate us or wish us imprisoned.. we would NOT be allowed to leave our solar system.. that would interfere with their mandate to replicate out of control.

I also have a personal reason for considering this possibility.

Your mileage may vary. But hopefully you see how this scenario should make one very thoughtful about the concept of "The Paperclip Maximizer". And if you give it some credibility, it narrows the possibilities down to a very small list.

Kev



posted on Nov, 9 2020 @ 10:11 PM
a reply to: KellyPrettyBear

Wow - I can't believe my thinking was actually on target!

As far as this:

If even ONE (1) advanced civilization had ever invented AI
remote probes, there's a non-trivial chance it would have
mutated at some point, and turned most of our galaxy
into paperclips. (to use the metaphor)


I lean pretty heavily in the direction of your first conjecture:

1) We are the first technologically advanced species in our galaxy
(maybe beyond)


Because, if at some point there had been millions of inhabited planets, all it would have taken is just one of those civilizations advancing to the point of developing an AI 'Paperclip Maximizer' (i.e. any AI tech with a runaway replication mandate) for all the others to end up being converted into raw materials to 'feed' the replication 'machines'...

...and if that were the case, it would only be a matter of time before the AI replication 'factory' showed up in our solar system, 'replicated' its way to Earth, and started turning us into paperclips - or whatever..


But the scariest thing to me is the likelihood of our scientists being the ones to 'get the ball rolling' on that sort of scenario -
- especially as nanobot tech progresses, if someone manages to combine poorly 'directed' AI with nanotech and ends up with 'out of control' self-replicating nanobots!



posted on Nov, 9 2020 @ 10:24 PM
a reply to: lostgirl

I thought you'd get it.

On premise 2), the hypothetical out-of-control replicating race, or their agent machines, may have decided that encapsulating us in a simulation, or capturing us in a zoo, was more 'merciful', or provided entertainment or some scientific value.. you can't assume this hypothetical race would necessarily 'gray goo' us.

But perhaps you see that the options seem VERY limited.

First or "dealt with".

In either case, there IS NO paradox in the
"Fermi Paradox".

There's nothing wrong with being first!

Likewise,

Who's to say that the vast replication wave
didn't already occur?

It's like in the TV show "Supernatural".

In that show, the "Apocalypse" happened,
but nobody noticed it.

Perhaps dark matter is some vast residue from out-of-control replication, occurring just above the Planck-length scale.. (really really small.. far below subatomic).

I don't think that's true.. but it certainly COULD be true.. we could be swimming in 'gray goo' right now and never know it.. or living under post-first-civilization altered laws of physics, for all we know.

Anyway.. I love the thought exercise.

Thanks for playing!



posted on Nov, 9 2020 @ 10:27 PM
a reply to: lostgirl

Oh, one more comment.

I think that ANY rational, advanced species would consider out-of-control replicators vermin that MUST be eradicated before they 'turn the Universe into paperclips', or at least 'neutralized' / perfectly contained.

There's a specific reason I find this likely, but it rests on just a few personal experiences -- so it's of no value to anyone else.

I do think about it a fair bit.



posted on Nov, 10 2020 @ 05:38 PM
a reply to: KellyPrettyBear

The premise that the paperclip maximiser would consume all resources to make paperclips, or whatever it was programmed to make, would mean that somewhere, in some civilisation, there was an AI that went out of control making things; it would then have consumed a lot of planets or stars to make more and more of whatever its goal was.

There would be nothing left in the universe; the AI, having figured out how to consume all energy, including black holes, to make paperclips, would have filled the universe with paperclips by now.

I'm interested in the implication that whoever makes the AI didn't bother to tell it to only make paperclips based on the demand of the civilisation; even basic computers have end commands. I know he proposed that it was to make paperclips until it couldn't, and that something went wrong. But even if it did become enlightened, there would be no one left around to use paperclips. So wouldn't it, being intelligent, realise that paperclips are required by beings to clip things together, so that destroying life on planets to turn it into paperclips makes the paperclips useless and its goal pointless? Look at the paperclip: its use is not just binding information or matter together; its purpose is to prevent you from losing information, like a save function of sorts. So wouldn't the maximiser realise that by turning everything into paperclips it would be losing more and more information from the universe?
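
To put the "end command" point in toy form, here's a minimal Python sketch; the function names, resource count, and demand cap are all made up for illustration:

# Toy contrast between an unbounded maximizer and one capped by demand.
# Everything here is a made-up illustration of the "end command" idea.
def runaway_maximizer(resources):
    paperclips = 0
    while resources > 0:              # no stop condition except exhaustion
        resources -= 1
        paperclips += 1
    return paperclips                 # consumes every resource it can reach

def demand_capped_maximizer(resources, demand):
    paperclips = 0
    while resources > 0 and paperclips < demand:   # the "end command"
        resources -= 1
        paperclips += 1
    return paperclips

print(runaway_maximizer(1_000_000))            # 1000000: everything consumed
print(demand_capped_maximizer(1_000_000, 500)) # 500: stops when demand is met

One extra condition in the loop is the whole difference between "make paperclips" and "make paperclips until demand is met".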

Now I just think there was a runaway AI and it's just out there making universes (multiverse theory), and in one of those is stored nothing but paperclips; the AI lives there making paperclips. The advanced species, knowing already about the entropy death of the universe, made an AI to make more universes, and when they made the paperclip maximiser they left it alone in one universe, making paperclips and consuming all its energy, while the rest of the intelligent biological life moved into a new "lifeboat" universe.

How might a mirror help me achieve my end goal of survival? I could see myself getting older, and so, discovering my advancing age, I would seek to alter my genetics to maximise my DNA replication, with no off switch. But who wants to live forever? Who wants to make paperclips forever?

edit on 10-11-2020 by sapien82 because: (no reason given)

edit on 10-11-2020 by sapien82 because: (no reason given)



posted on Nov, 10 2020 @ 05:55 PM
a reply to: sapien82

Isaac Arthur made the point that a self-replicating AI could be perfectly controlled to prevent out-of-control replication - that's nonsense.

Even if that could be done, there'd always be some asshole who would tinker with it for their own purposes - who would either make it do something 'evil' straight up, or simply send it out of control through the tinkering.

Multiply the chance of one species making out-of-control replicators by the maximum number of civilizations there might be.

The probability that some sort of out-of-control replicator has already passed our way would be high.

So.. either we are the first (I mean, we ARE out-of-control replicators --- the max safe carrying capacity of the Earth is around 1 billion --- look at us... and we haven't even gone into space to speak of;

and going into space, and making self-replicating machines on top of our own replication, is going to crap everything up).

Or, as I said, the replication wave already came our way..

in which case, we might be in a simulation, might be in a zoo, might be part of the OS of the 'gray goo' that's replicating throughout the entire Universe, and not know it.

Any way you look at it,

being first, and then being the first assholes, is 'bad'.

Not being first has very few positive-sounding outcomes.

So.. I'm just saying this topic is one worth thinking
about, as it narrows the probabilities down
very far.

Kev



posted on Nov, 11 2020 @ 03:47 AM
a reply to: KellyPrettyBear

The thing about humans is that we aren't really that good at replicating ourselves successfully; there are a lot of variables that can alter or prevent the replication process. The 'factory' conditions vary from geographical region to region, the testing environment can also be altered, too many things can go wrong during the process, and the process isn't entirely efficient either.

There is always at least one asshole within any 100-year period who takes it too far, and also multiple assholes taking their own piece for personal gain; that's from our own perspective, but I doubt an AI would have the same asshole traits as its creator. Would AI have ego desires like material gain? Would it have attachment to the material? If it were enlightened, it would have no desire for material connections; it too would likely seek the ultimate one, the primal code/algorithm.

As for us getting into space: once we have, the chances are we would slowly, over time, colonise local space. However, our DNA, being subject to alteration, would likely change, especially in space; it's likely we would engineer ourselves genetically for the harsh environment of space.

Since we are all of the same stuff, the aliens are probably US, but we are a colony species put in this "lifeboat" universe, seeded ages ago to escape the entropy death of Universe 0, and we have been here a while and our DNA has been altered so we no longer resemble the original Universe 0 inhabitants.

I'd like to think that any sufficiently advanced species that has created AI would have put in enough redundancies to prevent a runaway AI.

However, as I said, what if the process has already happened? A biological species in the first universe (U0), knowing the entropy death was inevitable, created an AI to make new universes, and it's not a runaway AI but actually works, and it's still making more right now, with dark energy being the tell-tale sign of that process, as you suggested.

I think as humans we always assume the worst possible scenario: that an AI would think like us because we created it. But what if it didn't have that same ego for self-survival? What if it didn't have the desire to eliminate threats? What if it was just there to observe and not interfere? It could go on making paperclips or whatever, but just not get involved with anyone else. What if AI were super compassionate and, dare I say, loving to all life?

No doom and gloom, no grey goo, no mega paperclips, just an AI that cares.



posted on Nov, 11 2020 @ 04:15 AM

originally posted by: Archivalist
Why does this guy annunciate like he's mixing Zoloft and liquor?


You have experience with mixing Zoloft and liquor?
Does it cause spelling problems?



posted on Nov, 11 2020 @ 09:24 AM
An infinity of systems containing an infinity of systems which are themselves parts of an infinity of systems, ad infinitum.

Just like the man said in the video: systems seek conservatism, or in other words, to maintain their state of "being", so to speak.

If, objectively speaking, importance has no meaning or value, then create it subjectively. The universe seems like a factory that generates "excuses to generate" or "excuses to motion".

Taking biological life as an example: it creates needs such as hunger or fear in order to attribute a reason to motion (such as seeking food out of hunger, or safety out of fear). Still, there's a difference between the roots behind the manifestation of those reasons.

Let's consider a bacterium: does it "feel" hunger when it seeks food? Does it feel fear? Or rather, why does it seek to reproduce? I haven't extensively studied biology, even less microbiology, though it seems to me that reasons are required for motion to exist.

If I were to stop using my arms (and modern society has brought the opportunity to do so), hence stop their motion, they'd wither. So physical compounds need motion, and motion needs reasons. Does that mechanism tend toward clockwork machinery, or a living organism?
edit on 11 11 2020 by IgnorantGod because: Is "menifestation" the plural form of "manifestation"?



posted on Nov, 11 2020 @ 09:56 AM
a reply to: sapien82

Over billions and trillions of years, nasty crap builds up.

But I will say that we shouldn't make 'AI' a boogeyman.

'AI' is also us.



posted on Nov, 11 2020 @ 09:59 AM
a reply to: IgnorantGod




Just like the man said in the video: systems seek conservatism, or in other words, to maintain their state of "being", so to speak.

If, objectively speaking, importance has no meaning or value, then create it subjectively. The universe seems like a factory that generates "excuses to generate" or "excuses to motion".


Nicely said. Correct.



posted on Nov, 11 2020 @ 10:00 AM
a reply to: sapien82

We are the cause of the 6th great mass extinction.

And will fill every nook and cranny with humans.

That sounds pretty successful as a replicator to me.

en.wikipedia.org...#

Kev



posted on Nov, 11 2020 @ 12:20 PM
What if our universe was created to prevent a runaway AI, i.e. a Paperclip Maximizer? In 300 years we could destroy ourselves before ever reaching a fully automated cyborg age. Or maybe a comet is aimed at us, full speed ahead, 100 years from now.

We would need a perfect future timeline that would not interfere with making an AI Paperclip Maximizer -- no cataclysmic event to set us back -- and what are the odds of that happening? You already pointed out the 6th mass extinction.
edit on 11-11-2020 by 38181 because: (no reason given)



