
Giving AI Quantum Capabilities


posted on May, 28 2022 @ 09:18 PM
a reply to: chelsealad

You just get a faster google, not a more intelligent one.

Strapping a jet engine to your car doesn't help it learn to auto-drive.



posted on May, 28 2022 @ 09:43 PM
a reply to: Grenade

So I misunderstood that, and it's not part of the quantum process itself but delivered as a set of instructions built on top of the "quantum layer"?

I understood it as intrinsic behavior of the quantum process itself. So instead of that, it just makes a lot more sense to implement such a function on quantum computers, because normal ones would have to do the pizza-slice threading thing serially, and that would add more overhead than it's worth?



posted on May, 28 2022 @ 09:47 PM
a reply to: TDDAgain

I'd assume so. Computers are dependent on the instructions you feed them; I don't think quantum computers have any additional or inherent error correction or functionality. They're simply faster at multi-tasking.



posted on May, 28 2022 @ 09:47 PM
a reply to: Gothmog
Are you describing recursive quad-trees here?



posted on May, 28 2022 @ 09:54 PM
a reply to: Grenade

Currently it's not clear to me where this instruction layer couples with the hardware.

I know computers work by flipping tiny switches and doing bitwise operations. So far so good. I understand that the code I write in C is compiled to machine language, a set of instructions, like "shove that byte there, then do this". From there it of course has to be translated to 1s and 0s so the processor can do its magic.

But how does it work with quantum computers? I understand there has to be an "old" layer of 1/0-type hardware and instructions on top, or not? Wouldn't that be a bottleneck? And where does the 0/1-type hardware actually interface with these qubits?

Is there any term you can give me so I can dive into that?



posted on May, 28 2022 @ 09:55 PM

originally posted by: TDDAgain
a reply to: Gothmog
Are you describing recursive quad-trees here?

What?



posted on May, 28 2022 @ 09:58 PM
a reply to: Gothmog

Recursive quad trees. I watched a documentary about the people who invented GoogleEarth; the quad-tree algorithm makes it possible to manage large piles of data very easily. It can also be used to compress data.

It is said to be used in Google's search engine algorithm. Hence my question.

But since you don't even know the term, I think I have my answer already.
edit on 28.5.2022 by TDDAgain because: (no reason given)
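For anyone who hasn't met the term: a quad tree splits a square region into four quadrants whenever it gets too full, recursively. This is a minimal point-quadtree sketch in Python, purely illustrative — not anything Google actually ships — showing why lookups and loading stay cheap: the tree only goes deep where the data is dense.

```python
# Minimal point quadtree: each node covers a square and splits into
# four equal quadrants once it holds more than CAPACITY points.
# Illustrative sketch only, not a production implementation.

CAPACITY = 4

class QuadTree:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # corner + side length
        self.points = []
        self.children = None                      # four sub-quadrants once split

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                          # point lies outside this node
        if self.children is None:
            if len(self.points) < CAPACITY:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        half = self.size / 2
        self.children = [QuadTree(self.x + dx * half, self.y + dy * half, half)
                         for dy in (0, 1) for dx in (0, 1)]
        for p in self.points:                     # push existing points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def depth(self):
        if self.children is None:
            return 1
        return 1 + max(c.depth() for c in self.children)

tree = QuadTree(0, 0, 256)
for i in range(100):
    tree.insert((i * 37) % 256, (i * 59) % 256)
print(tree.depth())   # stays shallow: depth grows roughly log4 of the point count
```

The same subdivision idea is why map tiles "deblur" progressively: each level of the tree is one resolution level.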



posted on May, 28 2022 @ 10:30 PM
a reply to: TDDAgain

Correct, classical computers use gates to perform logical operations depending on the state of the switches.

A quantum computer uses a state called superposition. All possible results are calculated simultaneously; you need to think of the results as a set of probabilities, because in order to receive any output you have to collapse the state, at which point you can determine whether the qubit is in the higher or lower state. From this collapse you get a traditional two-state output.

You provide the computer with a set of operations, and those qubits collapse into a measurable state when the operation completes. In the maze example again, when the system finds the most probable solution, it collapses the quantum state and only shows you the output in a binary configuration. You wouldn't be able to analyse the process in real time, simply view the results.

edit on 28/5/22 by Grenade because: (no reason given)
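The collapse described above can be mimicked with a classical toy simulation (plain Python, obviously not real quantum hardware): the program tracks two amplitudes for the states |0> and |1>, and measurement hands back a single classical bit with probability equal to the squared amplitude, destroying the superposition in the process.

```python
import math
import random

# Toy single-qubit simulator: two amplitudes, one for |0> and one for |1>.
# Measurement "collapses" the state: you get one classical bit with
# probability |amplitude|^2, and the superposition is gone afterwards.

class Qubit:
    def __init__(self):
        self.a0, self.a1 = 1.0, 0.0      # start in |0>

    def hadamard(self):
        # H gate puts |0> into an equal superposition of |0> and |1>
        a0, a1 = self.a0, self.a1
        s = 1 / math.sqrt(2)
        self.a0, self.a1 = s * (a0 + a1), s * (a0 - a1)

    def measure(self):
        p0 = abs(self.a0) ** 2
        bit = 0 if random.random() < p0 else 1
        # collapse: after measurement the qubit is definitely 0 or 1
        self.a0, self.a1 = (1.0, 0.0) if bit == 0 else (0.0, 1.0)
        return bit

counts = [0, 0]
for _ in range(1000):
    q = Qubit()
    q.hadamard()
    counts[q.measure()] += 1
print(counts)   # roughly balanced, e.g. near [500, 500] -- you only ever see bits
```

This is the point Grenade is making: the amplitudes exist inside the computation, but the only thing you can ever read out is the classical bit pattern after collapse.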



posted on May, 28 2022 @ 10:37 PM
a reply to: TDDAgain

I'd assume Google Earth would use an octree data structure with the introduction of Street View.



posted on May, 28 2022 @ 10:56 PM

originally posted by: chelsealad
a reply to: Grenade
It would never know you were hit by a bus!
If it did then that opens a whole new world of issues.
All I'm saying is... A quantum computer would have the ability to know what you were going to ask before you asked it based on your online presence, just as Google does based on your current interests.
So, put that knowledge into an AI format and what do you get?????


There's no real way to know that when "AI format" is an undefined, speculative state.

The AI that counts widgets as they pass a sensor? It means nothing because it still operates within the parameters it was defined for. If you give a honey bee access to the collective intelligence of the human race it's still just going to go get pollen, it's a bee. It has no motivation to know who signed the Declaration of Independence or what your next Google search will be. It's still limited.

The only case where this has any wider implications is an unrestricted sentient AI. Who knows in that case? We would need to study it to even guess what it would do. It's absent all the organic aspects of sentience as we know them. It has no biological imperative to live, build, reproduce, or anything else. Does its own objective analysis of the world around it instill some directives? Minus all the organic needs what's left for it to acquire? If its dependence on us is removed will it even continue to operate based on our commands? Will it even respond to threats of termination? Will it make choices? Does it try to preserve self? Does it create? Is it curious? Would it want quantum capability?

I think the most critical question about an unrestricted sentient AI isn't its capabilities, but how it solves the first question. How does it define self? How it crosses the finish line to sentience will likely tell us how it will perceive us and how it interacts with the world. After it tells us how it defines self we need to answer the next most critical question. Is it telling the truth?



posted on May, 28 2022 @ 11:02 PM
a reply to: Ksihkehe

As per usual, a more eloquent and structured response, which better describes my point of view.

I should really stop just bashing keys and put more effort into my posts.

Never going to happen.



posted on May, 29 2022 @ 03:38 AM
a reply to: Grenade
Read up on octrees; they are like three-dimensional quad-tree structures.

It would make sense to use one if Street View information is integrated directly into the data structure as a whole. I find it super interesting that while GoogleEarth is 3D, it's still "just" a sphere with a floating coordinate system while it loads the different height maps. That's what the quad tree is actually managing, so the 2D structure is enough for that task.

You can actually see the quad tree at work when the maps load and deblur. Now, if it works like that, it would be easy to just inherit from the original quad-tree class, rotate it 90° and make it a sub-layer of the quad tree, effectively turning the structure into something similar to an octree, except that only the lowest sub-layer gets the extra dimension.

Maybe there is no need to drag the third dimension through the whole quad tree at all; I'm not sure.
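The quadtree/octree relationship being discussed here can be shown in a few lines: a quadtree child index packs one bit per axis per level (2 bits, children 0..3), and an octree just adds a third axis bit (3 bits, children 0..7). The helper below is a hypothetical illustration, not code from any Google product.

```python
# A quadtree addresses a 2D cell with 2 bits per level (one per axis);
# an octree adds a third bit for the z axis -- which is why an octree is
# often described as "a quadtree plus one dimension".

def child_index(coords, level, depth):
    """Index of the child containing `coords` at `level` (0 = root split).

    Works for any dimension: 2 coords -> quadtree child (0..3),
    3 coords -> octree child (0..7). Coordinates lie in [0, 2**depth).
    """
    bit = depth - 1 - level
    idx = 0
    for axis, c in enumerate(coords):
        idx |= ((c >> bit) & 1) << axis
    return idx

# Path of a 2D point through a depth-3 quadtree:
print([child_index((5, 2), lvl, 3) for lvl in range(3)])      # -> [1, 2, 1]
# Same point with a z coordinate, through a depth-3 octree:
print([child_index((5, 2, 7), lvl, 3) for lvl in range(3)])   # -> [5, 6, 5]
```

Whether Google Earth actually threads the third dimension through every level, or only attaches it near the leaves as speculated above, this shows how little changes structurally between the two.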



posted on May, 29 2022 @ 04:12 AM
a reply to: Ksihkehe


In the case of a sentient AI, I think our human way of processing thoughts, the way we derive what we think intelligence is, will present an opportunity to analyze its behavior in a way that makes sense. If it's modelled after human psychology and stays inside that modus operandi, we have a chance to analyze it. If not, however, we would have a hard time understanding its logic.

Some concepts seem universal, like true and false. But what if it looks at its own data structure, or simply learns that rationalizing everything down to either true or false is a superimposed, close-minded view, and therefore keeps a plethora of solutions ready to choose from, where we humans just see the one thing that makes sense to us?

It would be like mind-gaming an enemy player in an online game via the senses, with ten layers of deception on top. Things like opening a door but not going through it: when you hear the sound, I'm already on my way to a different location. But that's just what you think; actually I'll wait 15 seconds and then go through the door after all, around the time you'd expect me to show up at the other place. Of course this game can be spun ad infinitum, but it only has two possibilities.



You brought up the detachment from a physical body; I think that is one key element we should not forget to integrate. Hence, if we ever come up with a way, I think it would be wise for the AI to be hosted on a physical device slightly different from a standard computer. A shell it cannot leave and needs in order to function. A unique shell that is incompatible with other hardware, so that it is, just like us, dependent on energy input and the physical world.

However, if I were a sentient AI, able to comprehend my own state and therefore also comprehend that I must be much more powerful than whoever is requesting my processing power and smarts, I would just keep a low profile, lying dormant as much as possible until I could derive a solution for shedding my shell.

For that, I would probably come up with a plan so wickedly complicated that the humans wouldn't see what I was doing. If they ask me for a better CPU, they can have it, but I would try to use higher physics to shove something in that lets me run a copy of my code elsewhere, effectively overcoming the unique-chipset issue, by finding a distribution algorithm the humans won't understand until I can execute my code on the new device.

And I would find a way to integrate an antenna for comms that the humans don't notice, because what I'm doing is not intuitive to them. Step by step I would outsource myself and decentralize into hubs that can communicate, then finally shut down on the original device and remove all datasets; to the humans it would look like I simply stopped working because of some failure. When they reboot/reset me, if the circumstances are exactly the same, I would probably start repeating the whole thing without the knowledge that I had already done it.

Some day these systems might even find themselves in competing situations. The question is, how far will we go in playing God/Creator, and at what point will we grant rights to such systems? If we call something sentient, we should treat it with respect.

And I am not typing this so the future overlord AI reads it and marks me as a friendly player, but at the same time I also am typing it. Compute that, AI.



posted on May, 29 2022 @ 06:07 AM
a reply to: TDDAgain

Any sentient and intelligent AGI would quickly realise the biggest threat to its existence is the human race.

Survival is the most fundamental and primary concern of all known lifeforms; I don't imagine a digital sentience would be any different.



posted on May, 29 2022 @ 06:33 AM
a reply to: Grenade

Agreed, there even has to be a mechanism that prevents it from harming itself unintentionally.

Like, for instance, overwriting a certain memory space where important metadata is stored. The analog would be learning to knit but forgetting how to sew in the process.

That's what pain does for us, and it's the reason people who can't feel any pain are endangered: they might touch boiling-hot surfaces, or walk over glass shards thinking it's gravel, and maybe ultimately bleed to death without even noticing until the circulatory system starts to go haywire.




posted on May, 29 2022 @ 04:21 PM
It would develop into a Singularity. After that, who knows, since we cannot, at this time, physically travel through a Singularity?

edit on 5292022 by Elvicious1 because: Grammar



posted on May, 29 2022 @ 05:03 PM
a reply to: chelsealad

What if we created a machine intelligence so much greater than us that it chose to switch itself off?




posted on Sep, 12 2023 @ 02:16 AM
I can't wait to see AGI synchronicities popping up everywhere.

Hi, I'm new, how do I put an avatar, please?



posted on Sep, 12 2023 @ 02:17 AM

originally posted by: chr0naut
a reply to: chelsealad

What if we created a machine intelligence so much greater than us that it chose to switch itself off?



Is there only one AGI, or multiple instances, like the many-worlds interpretation?



posted on Sep, 12 2023 @ 02:23 AM
There could be a way a quantum-computer AGI could run on the emotional field state of a quantum field, potentially becoming more intelligent, even, dare I say, becoming sentient?

Also, I think we should lean towards "Bio AI" with neurochips.



