
A new Why Files: How CRISPR and AI Destroy the World


posted on May, 11 2024 @ 09:45 PM
This video by The Why Files is part fiction and part fact about a possible dystopian future and the actual present-day events that might not end well for the human race.
This CRISPR technology is definitely a two-sided coin, with one side being good and the other pure evil.

This was a different W.F. episode than is usually presented, but it was entertaining IMO.

youtu.be...



posted on May, 11 2024 @ 10:21 PM
What part of AI did you not understand? The intelligence part or the artificial part of the AI equation? Either way, it's fake and artificial, based on the term AI alone.
edit on 11-5-2024 by RobRocket because: (no reason given)



posted on May, 11 2024 @ 11:08 PM
a reply to: 727Sky

That was a good episode, Sky.
Lots of research names there to keep an eye on.



posted on May, 11 2024 @ 11:32 PM
As usual, WF delivers. One of the best channels. The scariest thing about AI and CRISPR is that they are in the hands of governments who really only think about the destructive and oppressive uses of these technologies, and who have the money and power to decide how they are used.



posted on May, 12 2024 @ 12:39 AM

edit on 12-5-2024 by charlest2 because: (no reason given)



posted on May, 12 2024 @ 02:58 AM
Something from the past short stories forum:

I think most people of my generation believed in the scientific endeavor of creating something good out of applied science and research; even though the atom was cracked for war, death, and destruction, my generation still believed science could save us and create a better world for everyone.

There were rumors and movies about some A.I. taking over and relegating humans to nothing more than waiters waiting on their A.I. masters (if the A.I. permitted humans to survive at all, but those were just movies made for box-office receipts).

What was being missed (more or less under the table) was the research into genetic editing going on around the world to create smarter, faster, healthier individuals through gene splicing and editing. There were restrictions in Western countries because such science was considered taboo; yet in other countries (China in particular) there was only lip service as they proceeded with their plans to create a superhuman in both brains and brawn, since their morals contained no such taboo against this type of research and development. The Chinese as of 2022 already had an average I.Q. of 105, where other, poorer countries were lucky to have anything over an I.Q. of 65-80, which was adequate for most subsistence living and breeding, yet you could not expect any scientific breakthroughs from such a society.

Other factors come in, such as having your greatest minds try to figure out how many angels can dance on the head of a pin, or creating new and better ways to suppress the females in your society through scripture; neither of which helps any real scientific endeavor, especially when females are quite capable of being smarter than their suppressors.

Back in 2018 a Chinese research doctor claimed he had created three babies who were born with natural immunity against AIDS. He was sentenced to three years in jail for his creations; not because of what he had created, but because he was showing just how far along the research was progressing, in spite of the Chinese telling the world they were not creating designer babies.

By 2022 the Chinese had gone into full production mode of their new and improved designer babies. They had two sets they were developing.

I suppose you could call one set the super intelligent and the other the bigger, faster, stronger military set. They finally figured out they could have both traits in the same individual, so production was once again ramped up as young women were forcibly used to give birth to these new creations whether they wanted to or not (they were paid and received a certificate naming them Exalted Birthers; some birthed 12 children). After birth these babies were taken by the state and nurtured and educated to help foster and bring out their incredible traits for the betterment of the Chinese race. (The Spartans could have learned a few things from this education system, as it was cruel and very strict. When a child faltered they were usually not given a second chance, depending on their age.) Little did those in control realize that in just 20 years these super soldiers and deep thinkers would formulate a plan to dispose of the current Chinese leadership. They completely restructured Chinese society so that they were the top dogs, and everyone else, no matter where or in what country they lived, was there to serve them or die.

The End



posted on May, 12 2024 @ 04:11 AM
If only this guy could stop that Hecklefish BS, but it feels like it's getting more and more frequent. Everything else is well done, but that fish thing makes it hard for me to watch his videos at all.



posted on May, 12 2024 @ 04:23 AM
a reply to: DerBeobachter2

My kids love Hecklefish and the show.

They want the plushie. LoL

Granted, he's a tad hit or miss.

You either love him or hate him, I suppose.




posted on May, 12 2024 @ 12:20 PM
a reply to: 727Sky

I didn't watch the video (I've seen and read many like it, I'm sure). I work in aerospace technology, so I see a lot of AI-like emerging technology. As a result, I've done a fair amount of research into AI and the various forms of it.

Should we be concerned about AI and the future of this planet? Absolutely.

Is AI something which will take over the World tomorrow? No.

There are two basic types of artificial intelligence: Narrow and General. Most of what we see today in the field of AI is what is known as "Narrow" or "Weak" AI. This is a form of intelligence which has access to vast amounts of information and generates its output by gathering and comparing that information. In the bigger picture, this form of AI is relatively harmless. It's not completely harmless, mind you, but relatively so...when properly understood.

General or "Strong" AI, on the other hand, has far more sinister potential. In fact, the potential of this AI is so dangerous that leading researchers around the world are currently recommending a minimum six-month moratorium on any further development until the research, legal, economic, sociological, psychological and many other professions can even figure out how to quantify and handle what General AI can do. This is truly scary stuff...and I'm not exaggerating here. However, the good news is that General AI is far, far less mature than Narrow AI.

Fully defining the difference between the two would require a novel here, but there are some general rules of thumb which can ease basic understanding. Loosely, Narrow AI can be viewed as only being able to operate within a set of boundaries defined by a system. It can make a given system 'smarter', but it can't do anything outside of that system. In other words, everything it knows, and everything it can know, is system-specific. This is fairly controllable.

General AI is a far different breed of cat. General AI has the capability to learn about other systems outside its own environment, and then build relationships with those systems, which then work together to compound their learning, and the process continues to expand like this...possibly uncontrollably. When you put this concept into the context of 'cloud' computing today, where systems are often 'co-located' and 'virtualized' even on the same hardware, you don't have to think very hard to see how this can quickly get out of control.

Right now, an easy solution might seem to be..."well, just unplug the damn thing!"...but it's not that easy. With most networks and computing systems 'virtualized' today, things don't always live in one single place at any given point in time, so simply unplugging one virtual array, or even an entire data center, might have no effect at all. And then there's redundancy and replication. Systems today are highly redundant, and we can thank all the hackers and cyber criminals for this: modern systems are highly resilient to being compromised. So, while we might think we're shutting something down for the greater good, AI would see this as an attack and protect itself using the very technology which has been put in place to protect systems from cyber crime.

There used to be a time when, if you wanted to back something up, you just copied it off to a different hard drive or array. It ain't that easy anymore. Now things are backed up across multiple arrayed storage platforms, over countless data centers, across multiple countries and around the entire planet. Shutting something like this off is no easy task, and could even require the cooperation of multiple nations around the globe.
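
To make the replication point a little more concrete, here is a minimal fan-out replication sketch in Python. The "site" directory names are purely illustrative stand-ins for storage in separate data centers, not anything any real system uses; the point is only that losing one copy changes nothing.

# Toy fan-out replication: every write lands in several independent "sites".
# The site paths are illustrative stand-ins for separate data centers;
# losing any one copy leaves the data intact elsewhere.
import shutil
from pathlib import Path

REPLICA_SITES = [Path("site_us_east"), Path("site_eu_west"), Path("site_ap_south")]

def replicated_write(name: str, data: bytes) -> None:
    for site in REPLICA_SITES:
        site.mkdir(exist_ok=True)
        (site / name).write_bytes(data)

def read_any(name: str) -> bytes:
    # Any surviving replica is enough to recover the data.
    for site in REPLICA_SITES:
        candidate = site / name
        if candidate.exists():
            return candidate.read_bytes()
    raise FileNotFoundError(name)

replicated_write("model_state.bin", b"example payload")
shutil.rmtree(REPLICA_SITES[0])          # "unplug" one site entirely
print(read_any("model_state.bin"))       # the data is still available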

I could go on and on with this subject, but I'll stop here. I think this is a good synopsis, subject to any questions people may have.


edit on 5/12/2024 by Flyingclaydisk because: (no reason given)



posted on May, 12 2024 @ 12:50 PM
Just some other thoughts on AI, or maybe questions with likely answers...

But AI can't act out in a physical sense, can it?

No, not yet, BUT that's not really the current worry. Remember, banking systems, power grids, and all manner of other systems likewise have little or no human involvement. Many of these things are automated, and then networked together so they work harmoniously and seamlessly.

Shouldn't I only start worrying about AI when there are robots walking around programmed with AI?

Not exactly, for many of the reasons stated above. RoboCop running amok is probably not something we have to worry about, BUT, to put things into movie context, a Cyberdyne/Skynet scenario similar to the film Terminator is not that far outside the realm of possibility (without the robots and cyborgs and stuff).

Why is it so hard to get clear information on the web about AI?

Ah, this is a great question!! This is one area where Narrow AI can be very harmful. One of the biggest implementations of Narrow AI at present is in association with the Internet. The principal reason for this is that the internet is a colossal source of data which AI can draw upon. So, you see things like ChatGPT and others becoming more popular. Right now, we're already seeing AI-based efforts on the web change the complexion or reputation of things based on agendas. You've also heard about things like 'deep fakes' and others. Narrow AI systems may be limited in their scope, but they also learn very quickly within their environment. So, when you consider something like the Internet as a base of data, it's fairly obvious to expect these systems to eventually realize (learn) that it's in their best interest to paint themselves in a good light if they wish to survive. How do they do this? By manipulating the available information about them to remove the negative sides. And this is just the 'Narrow' version of AI at work!

Incidentally, this same subject comes up frequently here on ATS in other (non-AI) discussions. People ask..."Why can I no longer find this, or that, on the Internet? It used to be there!"...Well, there's a real-world example: AI is changing the internet, shaping it. The other concerning thing is that, with the speed of what we call 'compute' today, these things can happen very quickly and in almost unimaginable quantities. Data is no longer bound by someone typing on a keyboard; AI can make up its own data. In fact, if you'd like an example, just go out to ChatGPT and ask it to write you a 1,000,000-word doctoral dissertation in molecular biology. In less than a few minutes you will have a file in your inbox of exactly that! For real, and you never even typed a word other than the request. So, as you can see, even Narrow AI has some scary elements. Now, I'm not saying your ChatGPT dissertation will stand up to a full-scale peer defense (it still has some minor immaturity), but just the fact that it can do it at all is pretty shocking. But back to changing the landscape of the internet: it's not just that AI can do it which is so surprising, it's how fast and how thoroughly it can do it.
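
For anyone who has never tried that kind of request programmatically, here is a minimal sketch using the OpenAI Python client. The model name, prompt, and output filename are my own illustrative assumptions, and a single call returns at most a few thousand words, so anything book-length would take many chained requests.

# Minimal sketch: asking an LLM for long-form text programmatically.
# Assumptions: the "openai" Python package is installed and OPENAI_API_KEY is set;
# the model name "gpt-4o-mini" is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Write a detailed literature-review chapter on CRISPR-based gene editing."}
    ],
)

# Save whatever comes back; one call returns a few thousand words at most,
# so a genuinely book-length document would take many chained requests.
with open("draft_chapter.txt", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)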

What can we as a people do to stop AI?

This answer is surprising. Not that much, really. Why? Because much of AI development is driven not by scientific pursuits alone, but rather by the stronger profit and competitive-edge motivations of world business. This is why leading researchers in this area are trying to put the brakes on General AI development for 6 months (from my previous reply). In other words, changing the landscape of the internet is one thing, BUT changing the entire landscape of the world economy is a whole other thing!!

Just some thoughts.


edit on 5/12/2024 by Flyingclaydisk because: (no reason given)



posted on May, 12 2024 @ 01:10 PM
Oh, and one last thing. As we all know, Wikipedia is a horrible source, and sometimes outright garbage, yet millions of people still use it daily as a resource.

If you go to Wikipedia and search on AI, you will find a lengthy article which is almost incomprehensible. No, it IS incomprehensible! This is by design. Between AI bots and proponents of AI, they have collectively worked to obfuscate what AI is so thoroughly that most people looking for 'easy' answers will never understand it, throw their hands up, and give up in utter frustration.

That's EXACTLY what they want you to do!!!! They need time to further their development, and the more you (we) know about AI now, and its capabilities, the more we can participate in making responsible decisions about the future of AI. If they win, and develop AI to the point where no one can control it before anyone understands it, then all bets are off, and that is exactly what 'some' of the forces behind AI want.

It's all about money and power.



posted on May, 12 2024 @ 02:05 PM

edit on 5/12/2024 by yeahright because: Mod edit for Spam



posted on May, 12 2024 @ 02:26 PM

originally posted by: DerBeobachter2
If only this guy could stop that Hecklefish BS, but it feels like it's getting more and more frequent. Everything else is well done, but that fish thing makes it hard for me to watch his videos at all.


Preach!

Maybe it was a cool gimmick to start, but the fish has to go.

He does excellent research and puts things in an easy-to-understand format.



posted on May, 12 2024 @ 08:38 PM
I did have the video playing in the background. I was getting into the creepypasta vibe of the narration about where AI is going to go with microbiology in the future. Some good storytelling of some possibilities.

One function of DNA, with its binary reproduction, is that it works as a search algorithm. It is as if God does not even know exactly where life is going, but has set up a system that is looking for a better way of dealing with all the complexities of chemistry, biology, and life in general.

I know my head is too stupid to make sense of how this all works and come up with some genetic code that makes things better. Others are trying, and finding some small pieces of the puzzle along the way. As for the full picture: as a coder, I know my program ain't going to work unless I understand it all.

With some of the potential of these AI systems in data processing and finding associations, maybe they could find some genetic codes that accelerate the evolution and capabilities of life on Earth? Or maybe they only get it 99.9% right and stuff up along the way, toward an accelerated extinction event? Playing around with the language of life has dire results when mistakes are made.
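
To make the "DNA as a search algorithm" idea a bit more concrete, here is a toy genetic-algorithm sketch in Python. The target string, population size, and mutation rate are arbitrary assumptions chosen only to show variation, selection, and crossover at work; it is not anything from the video.

# Toy genetic algorithm: evolve random strings toward a target.
# Illustrates search-by-variation-and-selection; all parameters are illustrative.
import random
import string

TARGET = "better way"          # assumed goal purely for demonstration
POP_SIZE = 200
MUTATION_RATE = 0.05
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def random_genome() -> str:
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(genome: str) -> str:
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in genome
    )

population = [random_genome() for _ in range(POP_SIZE)]
generation = 0
while TARGET not in population:
    # Keep the fitter half, then refill with mutated offspring of random parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    generation += 1

print(f"Reached the target in {generation} generations")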

a reply to: Flyingclaydisk



This is why leading researchers in this area are trying to put the brakes on General AI development for 6 months


Handle with care those calling loudest for a stop to development. With all the money wrapped up in this, could it just be a ruse while they continue their development in the background to gain a competitive advantage over the market?

What is really going to change in 6 months, after all the decades of work that have gone into it so far? I find it unlikely that organizations like the NSA and BlackRock will want to put the brakes on what AI is doing for them.

One sci-fi story about this topic has the bots and different AI systems highly competitive against each other. They rip each other apart and explore each other's code, and what emerges is the best of both systems. Eventually there is just one AI system with dominion over this planet. How does that story go once interstellar travel comes online?
edit on 12-5-2024 by kwaka because: grammer



posted on May, 12 2024 @ 08:43 PM
a reply to: kwaka

Oh, what you say (in reply) is so very true. You are not wrong. That 'pause' is exactly what you suggest, a double-edged sword, so you are exactly correct in your observation.

I only pointed this out because those who make such statements publicly might be on the good side (and they might not be as well). In any case, it should give everyone something to think about, pause or not.

Thank you for your point; you are not wrong.

edit - Industry logic suggests those who object initially may have honest motives. Those who object later may have more sinister motives. Thus my point above. In other words, if there's no objection in the "wild" then development will continue (unbounded and in secret), but common sense dictates that objections raised only after risks like these are known may have more nefarious and ulterior motivations. Make sense? Again, things are not what they seem in this world.


edit on 5/12/2024 by Flyingclaydisk because: (no reason given)



posted on May, 12 2024 @ 09:34 PM
a reply to: Flyingclaydisk

Where this technology is going is worth a pause before jumping into the deep end to see whether we sink or swim. Where is this technology going? What are we going to do with it?

I see the development of the internet over the years as a strong reflection of humanity: mostly good, some bad, a bit of ugly. If this trend continues with AI, hopefully it won't go all Borg, Matrix, or Terminator style. To fully stop it, we're going to need Mad Max. All of these options suck. Some kind of Star Trek direction is a better goal.

Just keep doing the right thing as best we can. Nature will sort it out one way or another. I don't see AI wanting to kill everyone off anytime soon, as it needs the global supply chain for its own growth and survival.
edit on 12-5-2024 by kwaka because: spelling



posted on May, 12 2024 @ 10:19 PM
a reply to: kwaka

True, but "kill" is a word AI doesn't compute, not in a literal sense. AI has no emotion, so "kill" doesn't compute to AI, not to Narrow AI anyway. A human, to AI, is just an obstacle; AI has no sense of life or death. This obstacle may be helpful, or it may not; AI doesn't care. It will use the resources which help it learn, and discard those which don't. There is no emotion. For General AI, this is a huge issue, but for Narrow AI it is only a microscopic issue. There is no "intent" with Narrow AI; it just carries on. On the other hand, with General AI there very much IS an intent. And this intent is...remove the obstacle.

It's fortunate that General AI is not that mature at the moment, and where it does exist it is rather limited. Narrow AI is all over the place today, but people should look at the capabilities of Narrow AI and understand how these same principles can apply to the larger General AI as it relates to our future society.

edit - Current AI is confined to computer systems which draw data from the Internet, in the broadest sense. If it's not in the vast web of data on the Internet, then AI is powerless. In other words, it's just a 'smart' computer with a whole lot of resources. General AI is a far meaner dog. General AI is a sentient thinking machine which harvests data not only from the Internet, but from other systems like it. These systems compare this data, make decisions based upon it...and then bind with other systems to compound those decisions and analyses. This is where things get spooky.
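
A toy sketch of the "an obstacle is just an obstacle" point above: a planner that compares only costs has no concept of what the obstacle actually is. Everything here (the labels, the costs, the decision rule) is an illustrative assumption, not a claim about any real system.

# Toy illustration: the planner picks whichever action minimizes its cost,
# and the label of the obstacle carries no weight at all in the decision.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str          # "crate", "door", "person" - never consulted by the planner
    removal_cost: float
    detour_cost: float

def plan(obstacle: Obstacle) -> str:
    # Pure cost comparison; no notion of harm, life, or death exists here.
    return "remove" if obstacle.removal_cost < obstacle.detour_cost else "detour"

for obs in [Obstacle("crate", 1.0, 5.0),
            Obstacle("locked door", 4.0, 2.0),
            Obstacle("person", 1.0, 5.0)]:
    print(f"{obs.label}: {plan(obs)}")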


edit on 5/12/2024 by Flyingclaydisk because: (no reason given)



posted on May, 12 2024 @ 10:31 PM
a reply to: kwaka

Oh, to your statement..."Nature will sort this out"...

No, unfortunately this is one thing "nature" cannot sort out. This is outside of nature, and this is the most difficult part.

In the silicon world today, nature plays no role. Physics might, but it's physics beyond anything nature can do.



posted on May, 12 2024 @ 10:52 PM
a reply to: Flyingclaydisk

The crocodiles and snakes survived the last extinction periods. Until the Earth looks like Mars, even if the AI could kill all the jellyfish, would it want to be stuck on a barren planet with nothing to do? Maybe one day, if it lived long enough and just wanted to go to sleep.

Wherever AI and DNA can go, they are still limited by the rules of nature. If it is truly strong and capable, it will survive. If it is stupid and rash, it will join the rest of the screw-ups that help teach the rest what not to do.

As for the nature of human society, what kind of world would it be where it is much harder to hide secrets, as AI infiltration of every digital device continues to grow? How will these politicians vote as they become more personally impacted by ever more intrusive digital surveillance?



posted on May, 12 2024 @ 10:55 PM
a reply to: Flyingclaydisk

I enjoyed and appreciated your analysis. It sounds like the only thing that would stop or at least delay AI's progress would be a massive solar kill shot.

Or divine intervention!
edit on 13-5-2024 by charlest2 because: (no reason given)


