
Will Computers Outsmart Humans In 40 years?


posted on Nov, 1 2005 @ 12:07 PM
AI can be as smart as a human, maybe even smarter, because it has the capability of calculating the outcome of extremely difficult situations based on the data provided to it. So really, AI is only as smart as the data it has to work with. Translation: it depends.



posted on Nov, 1 2005 @ 04:02 PM

Originally posted by Linux
According to the author of this site, it will take 40 years for computers to outsmart humans. The author attributes the distant estimate to exponential thinking. His personal theory is that since technology now advances in bursts of decades rather than millennia, computers will become more intelligent than humans in the near future. I think it's a good rough estimate because we are obviously dawning on new technological eras. Even people who are not in the field of technology realize that computers will soon become more powerful - as they do every year.

bit.csc.lsu.edu...



Similarly, you can see the same exponential growth in virtually everything that has to do with technology: transportation (from legs, to horses, to carriages, to sail ships, to steam trains, to steam cars, to gas cars, to airplanes, to helicopters, to spaceships, to "smart" cars - as you see, every step taking less and less time), dentistry (from knocking out that aching tooth, to drilling, to anesthetics, to prosthetics, to preventive dental hygienists, to X-rays, to ultrasound whitening), archeology (from "hey, here's an old cup" to satellite site detection - I skipped the intermediate steps for brevity), clothing (from animal furs to "smart" clothes that can transmit a person's vital signs such as heart rate and temperature), window-making (from a hole in the wall to high-tech windows featuring glass with power-adjusted transparencies), etc.


What are other opinions on this guy's estimate? Is there some serious flaw in the logic of his argument? Any input is welcome; I'd love to hear other people's thoughts on this.


[edit on 30-10-2005 by Linux]


Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements... it is impossible to make a network/cluster/multicore with so many connections.



posted on Nov, 1 2005 @ 04:04 PM

Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements... it is impossible to make a network/cluster/multicore with so many connections.


With binary-based silicon technology ... but don't say it's impossible outright, as there are several alternatives that will bring computer technology on par with the human brain. Biotech may realize that goal before nanotech does.



posted on Nov, 1 2005 @ 04:16 PM

Originally posted by sardion2000

Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements... it is impossible to make a network/cluster/multicore with so many connections.


With binary-based silicon technology ... but don't say it's impossible outright, as there are several alternatives that will bring computer technology on par with the human brain. Biotech may realize that goal before nanotech does.


Well, with our level of technology, what I said is valid. If we find something extremely different and many orders of magnitude better, then the possibilities are open.

But let's not forget programming languages. I am a programmer, and I know. I can tell you that no matter how much progress has been made in hardware, programming languages are stuck in the 70s. They are primitive, unreliable, slow, open to exploits, easy to confuse, etc. With this kind of programming language, we will never reach true AI.



posted on Nov, 1 2005 @ 04:20 PM
Computers can never outsmart humans because

A) they can only be as smart as the humans who programmed them

B) They can not show love or any kind of emotion

C) they don't learn from their mistakes

Plus, they keep crashing all the time. Keeping it real, folks.



posted on Nov, 1 2005 @ 04:23 PM


A) they can only be as smart as the humans who programmed them


What if 100 humans work on one AI? Can the whole exceed the sum of its individual parts? That's the question, and it's still a very open one.



B) They can not show love or any kind of emotion


Pure speculation. Emotions are just chemical responses to neural stimuli.



C) they don't learn from their mistakes


Actually, that is incorrect; there are many game and search algorithms that use AI programming which learns from experience and mistakes. Play a modern-day FPS for proof of that; Quake 4 has some of the best AI I have seen to date.
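To make "learns from experience and mistakes" concrete, here is a toy sketch of the general idea (not how Quake 4's bots actually work): an agent that lowers its estimate of actions that failed and comes to prefer the ones that worked. The actions, rewards and numbers are made up purely for illustration.

```python
# Toy illustration of "learning from mistakes": a tiny value-learning agent.
# It penalizes actions that went badly and gradually prefers ones that worked.
import random

ACTIONS = ["duck", "strafe", "charge"]
q = {a: 0.0 for a in ACTIONS}          # estimated value of each action
alpha = 0.1                            # learning rate

def reward(action):
    # Hypothetical environment: "strafe" usually works, "charge" usually fails.
    return {"duck": 0.2, "strafe": 1.0, "charge": -1.0}[action] + random.uniform(-0.1, 0.1)

for step in range(500):
    # epsilon-greedy: mostly exploit what was learned, sometimes explore
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = reward(a)
    q[a] += alpha * (r - q[a])         # move the estimate toward what actually happened

print(q)  # "strafe" ends up with the highest value -> the agent learned from its mistakes
```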




Well, with our level of technology, what I said is valid.


You see the thread title? Why would you assume we would have the same technology in 40 years considering Moore's law and all ....




With this kind of programming language, we will never reach true AI.


Self-learning neural nets.

Anyway, I don't think they will ever outsmart humanity. I do believe they will eventually outsmart present-day humans, but I don't think they will ever pass us irretrievably.

[edit on 1-11-2005 by sardion2000]




posted on Nov, 1 2005 @ 04:25 PM

Originally posted by sardion2000



C) they don't learn from their mistakes


Actually, that is incorrect; there are many game and search algorithms that use AI programming which learns from experience and mistakes. Play a modern-day FPS for proof of that; Quake 4 has some of the best AI I have seen to date.


I am sorry for that, then.



posted on Nov, 1 2005 @ 04:29 PM
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?



posted on Nov, 1 2005 @ 04:32 PM

Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?


Why would a machine that is self-aware automatically want to kill anyone?



posted on Nov, 1 2005 @ 04:37 PM

Originally posted by sardion2000

Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?


Why would a machine that is self-aware automatically want to kill anyone?


Another good point, my friend. Why does everyone assume it will be a doomsday situation like Terminator 3 or something like that?

Maybe it will help us advance more than we could imagine. No one can know until the day it may or may not happen.



posted on Nov, 1 2005 @ 04:54 PM

Originally posted by spartan433
Computers can never outsmart humans because

A) they can only be as smart as the humans who programmed them


Intelligence is a byproduct of capacity, not of who did the programming. Therefore, with a bigger capacity than the brain, AI can outsmart humans.



B) They can not show love or any kind of emotion


Emotions have nothing to do with intelligence. Emotions are chemicals that drive us to flee or stay, increasing our chances of survival.



C) they don't learn from their mistakes


They sure do learn. There are already mini-brains, simulated neural nets, that can learn from experience. But they are slow.
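Here is a minimal, purely illustrative sketch of that idea: a single simulated neuron (a perceptron) that learns the OR function from its mistakes. The task, weights and learning rate are invented for the example; real research nets are of course far larger, but the principle of adjusting weights whenever the output is wrong is the same.

```python
# A single perceptron trained on the OR function -- a tiny "simulated neural
# net that learns from experience". Weights only change when it gets an
# answer wrong, i.e. it literally learns from its mistakes.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)     # non-zero only when the net makes a mistake
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])    # [0, 1, 1, 1] once it has learned
```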


[edit on 1-11-2005 by masterp]



posted on Nov, 1 2005 @ 04:59 PM


You see the thread title? Why would you assume we would have the same technology in 40 years considering Moore's law and all ....



Because Moore's law (which is not a law at all, just an observation) will stop being valid in a few years, when transistors cannot be made smaller than a few nanometers.

We would need a totally different kind of technology, of which we have no clue right now.



Self-learning neural nets.


Self-learning neural nets are not programming languages. We need ways to program those ultra-fast brains to make the calculations we want.



posted on Nov, 1 2005 @ 05:01 PM

Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?


Why should a computer want to kill someone? A computer does not have emotions, because it does not need to survive in mother nature. Emotions are there to make us survive; that's their only purpose. A computer would never feel threatened by anything.



posted on Nov, 1 2005 @ 05:03 PM

Originally posted by masterp
Self-learning neural nets are not programming languages. We need ways to program those ultra-fast brains to make the calculations we want.


What I was getting at is that maybe we can somehow get it to develop the language itself. I am by no means an expert, but to me that sounds like a possible avenue.



posted on Nov, 1 2005 @ 05:04 PM

Originally posted by sardion2000

Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements... it is impossible to make a network/cluster/multicore with so many connections.


With binary-based silicon technology ... but don't say it's impossible outright, as there are several alternatives that will bring computer technology on par with the human brain. Biotech may realize that goal before nanotech does.



The fact of the matter is we don't need to create a PC with nearly as many connections, because a computer can do more calculations and send more info per second than a synapse in the human brain can. The brain is considered quite slow in that respect, doing only approximately 200 calculations a second in each synapse; it is only the raw number of connections it has that gives it its power.
A computer, on the other hand, is capable of doing many times that in calculations per second.

This is why neural nets will become more powerful than the human brain.
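For a rough sense of scale, here is a back-of-the-envelope comparison that uses only figures quoted in this thread: 300 billion synapses, roughly 200 operations per second per synapse, and the ~19.2 GFLOPS theoretical peak for an Athlon 64 X2 given later in the thread. The numbers are illustrative, not measurements.

```python
# Back-of-envelope comparison using the figures quoted in this thread.
# These are rough, illustrative numbers, not benchmarks.
synapses = 300e9
ops_per_synapse_per_sec = 200
brain_ops_per_sec = synapses * ops_per_synapse_per_sec      # ~6e13 "operations"/s

cpu_flops = 19.2e9                                           # one dual-core desktop CPU (theoretical peak)
cpus_needed = brain_ops_per_sec / cpu_flops

print(f"brain: ~{brain_ops_per_sec:.1e} ops/s")
print(f"equivalent desktop CPUs (raw throughput only): ~{cpus_needed:.0f}")
```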

I don't know why, but I keep getting the impression people aren't reading the posts before them, including mine, where I've already reiterated this point a few times.



posted on Nov, 1 2005 @ 09:29 PM
That statement was also flawed in that you can't compare the interconnects between neurons with the network connections between cluster nodes.
It would be better to compare them with the internal interconnects and cache reads in a CPU.

The current fastest CPUs are close to, and when overclocked well over, 20 GFLOPS of theoretical potential.

To calculate this: current CPUs from both AMD and Intel do 4 FLOPs (2 ADD + 2 MUL single-precision operations) per clock cycle for a single core, and 8 FLOPs (4 ADD + 4 MUL single-precision operations) for a dual core.

You then multiply this by the clock speed of the CPU.

An AMD Athlon 64 X2 4800+ runs at 2400 MHz.

So to get the theoretical GFLOPS potential for that CPU, you get 2.4 GHz × 8 = 19.2 GFLOPS.

The highest non-extreme overclocks are around 2.6 GHz. By non-extreme I mean without dry-ice/LN2/phase-change cooling, just water or good air cooling.

That would give 20.8 GFLOPS already.

Then, Tyan is already selling dual-socket motherboards for Opterons and the Athlon FX. With one of those, you could have a 40 GFLOPS machine at home, if you wanted to blow $2000 on the motherboard, CPUs, a 700-1000W+ PSU, good cooling and decent memory.

Currently both Intel and AMD are at a bit of a standstill when it comes to clock speeds, but with Hyper-Threading, SSE, SSE2, SSE3, and dual-core (soon up to 16 cores and later even more) CPUs, they are boosting the number of FLOPs a CPU can do per clock cycle.

I think this is a great and long-overdue evolution of CPUs. A 16-core CPU would be able to do 64 FLOPs per clock; at 2 GHz that would give us a single CPU running at 128 GFLOPS, or even 153.6 GFLOPS if you run it at the 2.4 GHz the current fastest Athlon 64 X2 runs at.
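All the peak figures above come from the same multiplication, written out below so the examples in this post can be reproduced. It is theoretical-peak arithmetic only, not a benchmark.

```python
# Theoretical peak from this post:
#   peak GFLOPS = single-precision FLOPs per cycle per core x cores x clock (GHz)
def peak_gflops(flops_per_cycle_per_core, cores, clock_ghz):
    return flops_per_cycle_per_core * cores * clock_ghz

print(round(peak_gflops(4, 2, 2.4), 1))   # Athlon 64 X2 4800+ at stock: 19.2 GFLOPS
print(round(peak_gflops(4, 2, 2.6), 1))   # same chip, mild overclock:   20.8 GFLOPS
print(round(peak_gflops(4, 16, 2.0), 1))  # hypothetical 16-core, 2 GHz: 128.0 GFLOPS
print(round(peak_gflops(4, 16, 2.4), 1))  # ... at 2.4 GHz:              153.6 GFLOPS
```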

The added advantage of this is that memory interfacing between cores and shared data can be processed at much higher speeds, because the data never needs to leave the core's die. The biggest limitation in computers today is the buses connecting all the components; they are what actually slows a computer down.

In the last few years we've luckily seen memory, hard drives, I/O buses and FSBs soar to higher clock speeds and finally catch up with CPU core speeds, transferring as much as 4 bits per clock cycle, so the slowdowns and bottlenecks in graphics, audio and networking caused by slow interconnect speeds are slowly fading away.



posted on Nov, 2 2005 @ 01:11 AM

Originally posted by thematrix
To calculate this: current CPUs from both AMD and Intel do 4 FLOPs (2 ADD + 2 MUL single-precision operations) per clock cycle for a single core, and 8 FLOPs (4 ADD + 4 MUL single-precision operations) for a dual core.

You then multiply this by the clock speed of the CPU.

An AMD Athlon 64 X2 4800+ runs at 2400 MHz.

So to get the theoretical GFLOPS potential for that CPU, you get 2.4 GHz × 8 = 19.2 GFLOPS.


Wait... let me get this straight... doesn't the whole 32-bit or 64-bit core thing affect a processor's performance? Do you mean that for both Intel (mostly 32-bit) and AMD (64-bit), to get the max FLOP output you just multiply 4 FLOPs per cycle by the clock speed? Doesn't 64-bit mean a CPU can access twice as much info per cycle as a 32-bit one? I'm confused, so care to explain how this formula works? A link would be just as good.

So... a 32-bit Intel at, say, 2 GHz does (4 FLOPs × 2 GHz) a theoretical maximum of 8 GFLOPS. Now a 64-bit AMD at 2 GHz also does 8 GFLOPS? That doesn't make sense... shouldn't it do 16 GFLOPS?

Also, how would I calculate the maximum FLOPS output of a video card's GPU, say a Radeon X800 XT? It has a 256-bit core at 500 MHz with 16 pipelines (I don't know how many ALUs, though...).

Basically, I would love to know how to correctly calculate FLOPS... I've been looking online but have never found a formula. I just need to know all the factors needed to do the math... I would greatly appreciate a reply.


[edit on 2-11-2005 by beyondSciFi]



posted on Nov, 2 2005 @ 01:36 AM

Originally posted by masterp

Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements... it is impossible to make a network/cluster/multicore with so many connections.


Nature did it; why can't we? Granted, the number of interconnections is extremely large, but there is nothing mysterious about them.



posted on Nov, 2 2005 @ 07:16 AM
The length of the instructions and registers doesn't change the number of operations a CPU can do per cycle, although 64-bit instructions and data are larger and require the CPU to have more cache to run the same app, compiled for 64-bit, as efficiently as it would on 32-bit. This higher cache requirement could pull the GFLOPS rating for the CPU down considerably if the cache can't keep up providing data to the CPU.

What a larger address space does do is help applications get nearer to the theoretical GFLOPS, which at the bottom line still depends fully on the architecture of the whole computer.

AMD's 64-bit CPUs get a step closer to the theoretical rating by implementing the computer's memory controller in the chip, instead of on the motherboard, thereby cutting out a step that is a major factor in a computer's overall performance: the communication between CPU and memory.

The formula I gave is the theoretical GFLOPS potential, not what a CPU does in real life. The number of float operations a CPU can do per cycle can be found in the CPU's tech specs.

Real-life application performance and the GFLOPS score depend on programming, on every application separately and on every component in the computer separately. If you have slow RAM, slow hard drives and a bottleneck of a video card, the GFLOPS rating for your computer can drop to rock bottom, because the CPU constantly has to wait to receive data from the other components in the computer.

What's also a major factor in this real-life performance is what precision the application uses.

As for video cards: theoretically calculating a GFLOPS rating can be done with the same method; find out how many float operations the GPU can do per clock cycle and multiply by the clock speed.
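Applied to the Radeon X800 XT asked about above: the 500 MHz clock and 16 pipelines come from that post, but the float ops per pipeline per clock is a placeholder assumption (the ALU count wasn't known), so the result only shows the shape of the calculation.

```python
# Same peak-rate method applied to a GPU. The ops-per-pipeline figure is an
# ASSUMPTION for illustration, not a spec of the X800 XT.
def gpu_peak_gflops(ops_per_pipe_per_clock, pipelines, clock_ghz):
    return ops_per_pipe_per_clock * pipelines * clock_ghz

assumed_ops_per_pipe = 4   # placeholder value
print(gpu_peak_gflops(assumed_ops_per_pipe, 16, 0.5))  # 32.0 GFLOPS under that assumption
```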

In real life, it all depends on what type of graphics app you're running (non-HW T&L, HW T&L, PS1, PS1.3, PS1.4, PS2, PS2.5, PS3, VS1, VS2, VS3 and so on), what precision it's running at, and how the architecture allows for data transfers between the GPU, graphics RAM and the AGP/PCI-E bus it sits in.

For apps, the same applies as with graphics. If applications are designed to use MMX, 3DNow!, SSE, SSE2, SSE3, Hyper-Threading and so on, they'll be able to run closer to the theoretical GFLOPS rating of a CPU than unoptimized code will.

As for the subject of this thread: comparing a computer with the human brain can't really be done (with current computer architectures), but if you do want to compare them, then do it right.

This means comparing a brain that constantly processes all its standard audio/visual/taste/smell/touch inputs, runs your body, and has delays in the interaction between brain, memory and body, with a computer that has RAM, a hard drive, a video card, several buses with devices and extensions, human-programmed software, and several separate controller chips controlling nearly every component in the computer.

The main advantage the human brain has over a computer is that it's far better optimized, because it consists of far fewer components and is a true central processing unit.

In a computer, by contrast, the CPU is far from being the central processing unit (even though that's where its name comes from), because every component, from the sound card, hard drive, memory and video card to the new physics cards, has its own processor that somehow has to work efficiently together with the computer's CPU.

The reason gaming consoles are able to pull off so much magic with relatively little power (compared to a full-fledged PC) is that the architecture of a game console is extremely optimized and integrated, and the software written for consoles is also extremely optimized to work in that specific console's hardware environment.

On a PC, by contrast, applications have to be written to be compatible with all the possible hardware combinations out there, which is achieved by writing software that talks to hardware-interfacing layers like DirectX, OpenGL, OpenAL, Windows kernels, Linux kernels and so on.

When you write and compile software on a Linux machine specifically for the components of that machine, the performance gain is considerable. Just compiling Gentoo, for instance, optimized for the specific hardware in your PC can give you a system that's much more responsive and faster than when you just drop a standard Linux kernel onto the system.

In the world of computers, optimization is all-important. Yet for PCs, diversity and choice are even more important.

People pretty much have to choose: do I want to pick what hardware I buy and how much I pay for a video card, sound card or CPU, or should I be content with buying a standard system that has all the bells and whistles and is optimized rather well, but is slower at the things I need it for than if I'd chosen a cheap GPU and APU with a monster of a CPU?


To measure real-life GFLOPS (billions of floating-point operations per second), you divide the number of floating-point operations a program performs by its execution time in seconds, then scale to billions. One problem with this approach is the fact that the number of floating-point ops varies from program to program. Two programs, one that's 80% floating-point ops and one that's 20% floating-point ops, both of which take the same amount of time to execute, will have different GFLOPS ratings.

Another, even bigger problem with GFLOPS is that not all machines implement the same floating-point instructions. One machine may use two floating-point ops to perform a particular task, while another machine may use only one. If the task is completed in the same amount of time on both machines, the one that used two ops to do it will have the higher GFLOPS rating.

In short, neither GFLOPS nor MIPS provides a reliable metric for gauging performance. The next time you see a MIPS or GFLOPS rating, notice the source—I’ll 99% guarantee you it’s a vendor. The reason for this is twofold. First, a vendor is the only one who’s really going to put in the time and effort that it takes to count up the instruction mix for a program and do all the other stuff you have to do to assign a MIPS or FLOPS rating. Second, vendors are the only people who benefit from such a rating. Most consumers don’t know enough about a vendor’s architecture to be able to determine which floating-point ops are available on it versus which FP ops are available on competing architectures. Even if a consumer did have this info, the vendor never divulges what program was used for the rating or what the instruction mix for that program was, so it wouldn’t be of any use.
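For what a do-it-yourself measurement can look like, here is a minimal sketch (assuming NumPy is available) that counts the floating-point operations of one chosen workload and divides by wall-clock time in seconds. As the paragraphs above explain, the resulting figure is only meaningful for that particular workload.

```python
# Minimal "real-life" FLOPS measurement: count the FP operations of a chosen
# workload and divide by elapsed time. The number reflects this workload only.
import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

flop_count = 2 * n**3                  # an n x n matrix multiply does ~2*n^3 FLOPs

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

print(f"~{flop_count / elapsed / 1e9:.2f} GFLOPS on this particular workload")
```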




[edit on 2/11/05 by thematrix]



posted on Nov, 3 2005 @ 02:27 AM
Computers will continue to be useful to us, but unless you inject a soul into the machine, it will not be the same as a human. You may have an analytical wonder, but you won't have a human.

Every moment of our lives is stored in our memory, yet no more physical space is taken up for storage. Has anyone seen anyone come out of years of college with a larger skull? Unless the skull was still growing, I doubt it. Something to think about.


Troy


