Originally posted by Linux
According to the author of this site, it will take about 40 years for computers to outsmart humans. He attributes the estimate to exponential thinking: his theory is that since technological progress that once took thousands of years now arrives in bursts of decades, computers will become more intelligent than humans in the relatively near future. I think it's a good rough estimate, because we are clearly at the dawn of new technological eras. Even people who are not in the field of technology realize that computers will soon become more powerful, as they do every year.
bit.csc.lsu.edu...
Similarly, you can see the same exponential growth in virtually everything that has to do with technology: transportation (from legs, to horses, to carriages, to sailing ships, to steam trains, to steam cars, to gas cars, to airplanes, to helicopters, to spaceships, to "smart" cars, with each step taking less and less time), dentistry (from knocking out the aching tooth, to drilling, to anesthetics, to prosthetics, to preventive dental hygienists, to X-rays, to ultrasound whitening), archaeology (from "hey, here's an old cup" to satellite site detection, skipping the intermediate steps for brevity), clothing (from animal furs to "smart" clothes that can transmit a person's vital signs such as heart rate and temperature), window-making (from a hole in the wall to high-tech windows with electronically adjustable transparency), etc.
What are other opinions on this guy's estimate? Is there a serious flaw in the logic of his argument? Any input is welcome; I'd love to hear other people's thoughts on this.
Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements; it is impossible to make a network/cluster/multicore with that many connections.
Originally posted by sardion2000
Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements; it is impossible to make a network/cluster/multicore with that many connections.
With binary-based silicon technology ... but don't say it's impossible outright, as there are several alternatives that could bring computer technology on par with the human brain. Biotech may realize that goal before nanotech does.
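For a sense of scale, here is a rough back-of-envelope sketch in Python; the 300 billion figure comes from the quote above, while the bytes-per-connection values are purely my own assumptions for illustration:

# Rough storage estimate for holding one numeric weight per connection,
# using the 300 billion synapse figure quoted in the thread. The precisions
# tried below (1, 4, and 8 bytes per weight) are assumptions, not facts
# about the brain or any particular machine.
SYNAPSES = 300e9

for bytes_per_weight in (1, 4, 8):
    total_bytes = SYNAPSES * bytes_per_weight
    print(f"{bytes_per_weight} byte(s) per weight -> {total_bytes / 1e12:.1f} TB")

# 1 byte(s) per weight -> 0.3 TB
# 4 byte(s) per weight -> 1.2 TB
# 8 byte(s) per weight -> 2.4 TB

Raw storage for that many weights is not the bottleneck; the contested part is connecting and updating them fast enough, which is what the posts above are really arguing about.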
A) they can only be as smart as the humans who programmed them
B) they cannot show love or any kind of emotion
C) they don't learn from their mistakes
Well, with our level of technology, what I said is valid.
With these kinds of programming languages, we will never reach any true AI.
Originally posted by sardion2000
C) they don't learn from their mistakes
Actually, that is incorrect; there are many game and search algorithms that use AI programming which learns from experience and mistakes. Play a modern-day FPS for proof of that: Quake 4 has some of the best AI I have ever seen to date.
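As a concrete illustration of what "learning from mistakes" means in code, here is a minimal tabular Q-learning sketch in Python. The corridor environment, rewards, and parameters are all invented for the example; nothing here is taken from Quake 4 or any game mentioned in the thread.

import random

# A 5-state corridor: the agent starts at state 0 and is rewarded for reaching
# state 4. Bumping into the left wall costs -1, so the agent gradually learns
# from those mistakes to walk right instead. All numbers are illustrative.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = state + action
    if nxt < 0:
        return 0, -1.0, False            # hit the wall: penalty, stay at 0
    if nxt == GOAL:
        return GOAL, 1.0, True           # reached the goal: reward, episode ends
    return nxt, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                       # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward what actually happened; bad moves sink.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should be "step right" (+1) in every non-goal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})

The point is simply that the trained behavior is not something anyone typed in; it came out of trial, error, and the penalty signal.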
Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?
Originally posted by sardion2000
Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?
Why would a machine that is self-aware automatically want to kill anyone?
Originally posted by spartan433
Computers can never outsmart humans because
A) they can only be as smart as the humans who programmed them
B) they cannot show love or any kind of emotion
C) they don't learn from their mistakes
You see the thread title? Why would you assume we would have the same technology in 40 years considering Moore's law and all ....
Self-learning neural nets.
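To make "self-learning neural nets" a bit more concrete, here is a tiny Python/NumPy sketch of a network that improves itself by correcting its own prediction errors (backpropagation). The XOR task, layer sizes, and learning rate are arbitrary choices for illustration, not something described in the thread.

import numpy as np

# A small 2-8-1 sigmoid network learning XOR. Nobody programs in the answer:
# the weights start random and are repeatedly adjusted from the net's own errors.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)          # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                     # the network's current mistakes
    # Backward pass: push each weight in the direction that shrinks the error.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))   # typically very close to [[0], [1], [1], [0]]

The interesting part is that the final behavior is trained rather than hand-coded.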
Originally posted by spartan433
And anyway, don't you think people would have the common sense not to make a computer that smart? So smart it's "self-aware" and wants to kill them?
Originally posted by masterp
Self-learning neural nets are not programming languages. We need ways to program those ultra-fast brains to make the calculations we want.
Originally posted by sardion2000
Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements; it is impossible to make a network/cluster/multicore with that many connections.
With binary-based silicon technology ... but don't say it's impossible outright, as there are several alternatives that could bring computer technology on par with the human brain. Biotech may realize that goal before nanotech does.
Originally posted by thematrix
To calculate this with current CPUs: both AMD and Intel do 4 FLOPs per clock cycle (2 ADD + 2 MUL single-precision operations) for a single core, or 8 FLOPs (4 ADD + 4 MUL single-precision operations) for a dual core.
Multiply this by the clock speed of the CPU.
An AMD Athlon 64 X2 4800+ runs at 2400 MHz.
So the theoretical GFLOPS potential for that CPU is 2.4 GHz x 8 = 19.2 GFLOPS.
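Here is that same back-of-envelope arithmetic as a short Python sketch, so it is easy to rerun for other chips. The FLOPs-per-cycle figures are taken from the post above and are a simplification of how real CPUs behave; the function name is just something I made up.

def theoretical_gflops(clock_ghz, flops_per_cycle):
    # Theoretical peak GFLOPS = clock speed in GHz x floating-point ops per cycle.
    return clock_ghz * flops_per_cycle

# Athlon 64 X2 4800+ example from the post: 2.4 GHz and 8 single-precision
# FLOPs per cycle across the two cores (4 ADD + 4 MUL).
print(theoretical_gflops(2.4, 8))    # 19.2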
Originally posted by masterp
Computers will never reach humans. The human brain has 300 billion synapses, i.e. connections between processing elements; it is impossible to make a network/cluster/multicore with that many connections.
To measure real-life GFLOPS (billions of floating-point operations per second), you divide the number of floating-point operations a program executes by its execution time in seconds, then scale to billions. One problem with this approach is that the proportion of floating-point ops varies from program to program. Two programs, one that's 80% floating-point ops and one that's 20% floating-point ops, both of which take the same amount of time to execute, will have different GFLOPS ratings.
Another, even bigger problem with GFLOPS is that not all machines implement the same floating-point instructions. One machine may use two floating-point ops to perform a particular task, while another machine may use only one. If the task is completed in the same amount of time on both machines, the one that used two ops to do it will have the higher GFLOPS rating.
In short, neither GFLOPS nor MIPS provides a reliable metric for gauging performance. The next time you see a MIPS or GFLOPS rating, notice the source: I'll 99% guarantee you it's a vendor. The reason is twofold. First, a vendor is the only one who will really put in the time and effort it takes to count up the instruction mix of a program and do everything else needed to assign a MIPS or FLOPS rating. Second, vendors are the only people who benefit from such a rating. Most consumers don't know enough about a vendor's architecture to determine which floating-point ops it offers versus which ops are available on competing architectures. Even if a consumer did have this information, the vendor never divulges what program was used for the rating or what its instruction mix was, so it wouldn't be of any use.
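To see why the rating depends on the instruction mix, here is a toy Python sketch; the op counts and runtimes are invented purely to illustrate the point made above.

def gflops(fp_op_count, runtime_seconds):
    # Measured GFLOPS = floating-point ops executed / runtime, scaled to billions.
    return fp_op_count / runtime_seconds / 1e9

# Two hypothetical programs that each run for 10 seconds on the same machine.
# Program A's instruction mix is 80% floating point, program B's only 20%,
# so they execute different numbers of FP ops in the same wall-clock time.
total_ops = 100e9
print(gflops(0.8 * total_ops, 10.0))   # program A: 8.0 GFLOPS
print(gflops(0.2 * total_ops, 10.0))   # program B: 2.0 GFLOPS

Same machine, same runtime, very different numbers, which is exactly why the rating says more about the benchmark than about the hardware.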