Back in 2011, the aerospace giant Lockheed Martin paid a cool $10 million for the world's first commercial quantum computer from a Canadian start-up called D-Wave Systems. In May last year, Google and NASA followed suit, together buying a second-generation device for about $15 million, with Lockheed upgrading its own machine for a further $10 million.
These purchases marked the start of a new era for quantum computation. Theoretical physicists and computer scientists have been predicting for 30 years that quantum computers can dramatically outperform the conventional variety. And in May last year, these dreams at last appeared to be coming true when Cathy McGeoch at Amherst College in Massachusetts said she’d clocked the D-Wave device solving a certain class of problem some 3600 times faster than a conventional computer.
This finally backed up D-Wave's long-standing but never-confirmed claims that its device was indeed faster than anything else at some tasks. Never had quantum computing's stock ridden so high.
Since then, quantum computing (at least, D-Wave's version of it) has undergone a dramatic change in fortune. And that culminates today with a report from a team of physicists from IBM's T. J. Watson Research Center in Yorktown Heights, NY, and the University of California, Berkeley, who say that D-Wave's machine may not be quantum at all. Indeed, its results could just as easily be explained if it were entirely classical.
Some techniques involve trapping ions, electrons, or other tiny particles; some propose using superconductors to create microscopic quantum circuits; others suggest it might be possible to use photons and complex optical apparatus to achieve a similar goal. What all these techniques have in common, however, is that they are currently plausible at small scale but incredibly difficult to realize at large scale. Essentially, that limits quantum computers to research machines, at least for now.
The problem is that, as good ol' Schrödinger was only too keen to point out, quantum systems need to be isolated from the rest of the world in order to work. Interactions with the external world cause the system to decohere, collapsing it into a definite classical state, just like a bit in a normal computer.
The solution? Decide on an error rate (the amount of decoherence you're happy for the system to put up with) and design for that.
Even that's an imperfect solution, though: to keep the error rate small enough that you still get the benefits of a respectable quantum computer, you'd need a weighty bump in the number of qubits to provide error correction, and those extra qubits are extremely difficult to produce in the first place, which… well, you can see where that goes.
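To put rough numbers on that trade-off, here is a minimal sketch in Python. It uses a simple repetition code (majority vote over d noisy copies) rather than the more elaborate codes real machines would need, and the physical and target error rates are illustrative assumptions, but it shows how fast the qubit bill grows:

```python
# A rough sketch of why error correction inflates qubit counts: a
# distance-d repetition code suppresses a physical error rate p to a much
# smaller logical rate, but costs d physical qubits per protected bit.
# Real schemes differ in detail; this only illustrates the scaling.
from math import comb

def logical_error_rate(p, d):
    """Probability that a majority of d independent copies flip (d odd)."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

p = 0.01          # assumed physical error rate per qubit
target = 1e-12    # assumed target logical error rate
d = 1
while logical_error_rate(p, d) > target:
    d += 2        # repetition codes use odd distances
print(f"Need d = {d} physical qubits per logical qubit "
      f"(logical error rate {logical_error_rate(p, d):.2e})")
```

With these assumed numbers, a single reliable logical qubit already costs 15 physical ones, and every one of those extra qubits is itself hard to build.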
First, there’s the question of knowing if it’s even working in the first place. A widely known tenet of quantum mechanics is that merely observing the phenomenon changes the outcome of an event. So, watch a quantum particle, or a qubit, or anything quantum for that matter, and you change its behavior. That means that it’s actually very difficult to tell if a quantum computer is behaving in the way we’d expect or need it to.
In fact, the currently available so-called quantum computers haven't actually been verified to work the way they're supposed to. They're simply built on the right theory, some crossed fingers, and judged by their output.
Coding a quantum computer is no mean feat; by their very nature, these machines give answers that are probabilistic, not deterministic. For many problems, that means the answer isn't necessarily bang on at the first attempt; instead, the same calculation has to be repeated a number of times before the correct answer emerges as the most frequent one. In turn, this means that, depending on the type of problem, there isn't necessarily a huge advantage in using a quantum computer over a regular one.
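Here is a small sketch of that "repeat and keep the most common answer" workflow. The noisy_quantum_routine function and its 60% success probability are made up for illustration; it simply stands in for any probabilistic computation:

```python
# Sample a probabilistic routine many times and keep the mode of the
# results, as the paragraph above describes.
import random
from collections import Counter

def noisy_quantum_routine():
    """Stand-in for a probabilistic quantum computation: returns the
    correct answer 42 with 60% probability, a spurious value otherwise."""
    return 42 if random.random() < 0.6 else random.randint(100, 200)

shots = [noisy_quantum_routine() for _ in range(1000)]
answer, count = Counter(shots).most_common(1)[0]
print(f"Most frequent outcome: {answer} ({count}/1000 shots)")
```

The repetition is pure overhead: a thousand runs to extract one trustworthy answer, which is why the raw speedup doesn't always translate into a practical one.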
It's possible to exploit some of the almost magical power of quantum mechanics to improve the speed with which solutions are reached, but so far researchers have only managed to do it for a very small set of problems, like finding the prime factors of very large numbers. That's neat (and, it turns out, useful for cryptography), but it's certainly limited.
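To see why factoring is the poster child, here is a purely classical sketch using brute-force trial division. The semiprimes are made up for the demo (products of primes just above 10^6 and 10^7); the point is that adding digits to the number multiplies the classical work, which is the scaling a quantum factoring algorithm is expected to beat:

```python
# Classical trial division: the work grows with sqrt(n), i.e. it blows up
# exponentially in the number of digits. The second case already takes
# noticeably longer than the first despite being only two digits bigger.
import time

def trial_division(n):
    """Return the smallest prime factor of n by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

for n in [1000003 * 1000033, 10000019 * 10000079]:
    t0 = time.perf_counter()
    p = trial_division(n)
    print(f"n={n}: factor {p} found in {time.perf_counter() - t0:.3f}s")
```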
The most concerning advantage relates to codebreaking. Today, communication networks pass digital information over public infrastructure, such as fiber-optic pathways and wireless airwaves, using encryption to prevent eavesdroppers from reading the content of the message traffic. The only thing stopping eavesdroppers from decrypting this traffic is the mathematical complexity of doing so. Quantum computers will have the ability to crack these codes in far less time than today's most advanced conventional computers. Furthermore, as quantum computers make linear gains in computational power, they will exponentially decrease the time it takes to break current means of encryption.
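As a concrete illustration of what's at stake, here is a toy RSA-style sketch using the textbook-small, deliberately insecure numbers p = 61 and q = 53. Breaking it reduces to factoring the public modulus n, which is exactly the task a quantum computer running Shor's algorithm would make easy:

```python
# Toy RSA: encrypt and decrypt with tiny numbers. Real keys use primes
# hundreds of digits long; security rests entirely on n being hard to
# factor back into p and q.
p, q = 61, 53            # secret primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: 2753 (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
assert recovered == message
print(f"ciphertext={ciphertext}, recovered={recovered}")
# An eavesdropper who could factor n could compute d and read the
# traffic; that is the codebreaking threat described above.
```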
originally posted by: swanne
However, although qubits can in theory hold more information, the result isn't necessarily a more efficient computer.
That's my understanding also.
originally posted by: ErosA433
Closer to reality is
A quantum computer has the potential to be thousands of times faster at solving specific tasks; for other tasks, though, it can be on par with or even slower than a conventional one.
Running those cooling systems isn't cheap, either.
quantum devices using atomically heavy materials such as silicon or metals need to be cooled to temperatures near absolute zero... the refrigeration systems required to cool materials close to absolute zero can cost upwards of millions of dollars and occupy physical spaces the size of large rooms.