What has Schrodinger got to do with the complex and paradoxical concept of quantum computing? The answer is that both Erwin and his hypothetical cat relate directly to this subatomic world. In fact, Schrodinger is often referred to as one of the fathers of quantum mechanics.
Erwin who?
Erwin Schrodinger was an Austrian theoretical physicist, awarded the Nobel Prize in Physics way back in 1933, shared with Paul Adrien Maurice Dirac, for the discovery of 'new productive forms of atomic theory.'
Two years later, Schrodinger proposed a thought experiment in which a hypothetical cat and a radioactive sample are sealed within a steel box for an hour. So far, so weird. But wait, it gets weirder. If a radioactive atom decays within that time then, courtesy of a relay from a Geiger counter also in the container, it releases a small hammer that breaks a flask of hydrocyanic acid and the cat dies. If no atom decays within that hour, and the chances of decay are pretty much fifty-fifty, then the flask remains intact and so does the feline.
The Copenhagen Interpretation
Now we need to explain the Copenhagen interpretation, which isn't an episode of The Big Bang Theory but rather a concept in quantum mechanics that a particle exists in all possible states simultaneously until it is observed. Under the Copenhagen interpretation, the radioactive material is both decayed and not decayed at the same time: it is in a state of 'superposition' until it is observed. Which would logically mean that Schrodinger's cat is also both alive and dead simultaneously, unobserved, within that sealed container.
Schrodinger posed the question: at what point does a quantum system stop existing as a superposition of states and simply become one or the other? He thought that the cat was dead or alive regardless of whether you observed it or not, and ditto the state of a quantum system. History has a funny way of not working out quite how you intended, and that's certainly the case for Erwin and his fantastical feline.
After Schrodinger and his cat
The legacy of this thought experiment has been to drive many different approaches to quantum theory over the years that seek to advance the notion of superposition. There's the Many Worlds interpretation, for example, which suggests that when the box is opened the observer and the cat split into separate realities: one in which the cat is alive and another in which it is dead, so the superposition never truly collapses at all.
How does a quantum computer work then?
OK, so what exactly is a quantum computer then? Let's go back to the Copenhagen interpretation, shall we? It was put under the spotlight in the 1930s by work from Albert Einstein, together with Boris Podolsky and Nathan Rosen, that became known as the EPR paradox. That's where the notion of quantum superposition was really brought into the open: it suggested that an atom, or a photon, could exist in multiple states at the same time, corresponding to multiple possible outcomes. The Copenhagen interpretation holds that a quantum system stays in this state of superposition until there is an external interaction or observation. Only then does the superposition collapse into one of the possible outcome states.
In quantum computing, you have to think of the traditional bits at the heart of a binary system (which are either one or zero) as quantum bits, or qubits, instead. These qubits, as per Copenhagen, can be one and zero at the same time. What's more, a qubit can sit in any weighted combination in-between one and zero, and the states of multiple qubits can be correlated with one another, at which point they are said to be entangled.
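To make that a little less abstract, here's a minimal sketch of the maths of a single qubit, simulated with NumPy on an ordinary classical machine (so an illustration of the idea, not a real quantum device):

```python
# A single qubit as a weighted combination of |0> and |1>; "observing" it
# collapses that combination to a definite 0 or 1. Classical simulation only.
import numpy as np

# An equal superposition: amplitude 1/sqrt(2) for each of |0> and |1>.
qubit = np.array([1, 1]) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(qubit) ** 2   # -> [0.5, 0.5]

# "Opening the box": sampling collapses the superposition to one outcome.
outcome = np.random.choice([0, 1], p=probabilities)
print(f"P(0) = {probabilities[0]:.2f}, P(1) = {probabilities[1]:.2f}, measured: {outcome}")
```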
Faster, pussycat, faster
Why is this such a big deal? Simply because it can do away with the linear processing where one calculation is done after another to arrive at an answer. With qubits existing in every value at the same time, a quantum computer can explore every single possibility at once, and when you collapse that calculation by observing it you have the answer. Very quickly indeed. Sort of. The practicalities of controlling qubits (and the subatomic particles that store them) are not at all simple, or cheap to develop.
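To get a feel for the scaling, here's a back-of-the-envelope Python sketch. The 'test everything at once' picture is a simplification, so this uses the well-known square-root query count of Grover's quantum search algorithm as a stand-in; it computes the rough effort figures only, it doesn't simulate either machine:

```python
# Rough effort comparison for searching an unstructured space of N items:
# a classical machine checks candidates one by one (~N/2 on average), while
# Grover's algorithm needs roughly sqrt(N) iterations.
import math

for n_bits in (20, 40, 60):
    n = 2 ** n_bits                 # size of the search space
    classical = n // 2              # expected classical checks
    quantum = math.isqrt(n)         # ~sqrt(N) Grover iterations
    print(f"{n_bits}-bit space: classical ~{classical:.1e} checks, "
          f"quantum ~{quantum:.1e} iterations")
```

The gap widens dramatically as the search space grows, which is the whole point: for a 60-bit space the classical figure is around 10^17 checks against roughly a billion quantum iterations.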
How much faster are we talking about? Well, when Google researchers and NASA boffins put a D-Wave quantum computer to the test (yes, they do exist, more on that in a moment) they concluded it was 100 million times faster, at least on a carefully chosen problem.
These computations require qubits to interact, and functioning in a way we would recognise as a usable computer requires lots of them. That's where the real problems in developing quantum computers begin. While the likes of IBM and D-Wave have already produced machines with limited numbers of qubits, we are still some way off truly practical quantum computers with the kind of qubit counts required to make them workable.
That's because of the control issue when you have such a complex system of qubits existing together. A supercooled environment is required to maintain quantum superposition and entanglement. How cold is supercooled? As near to absolute zero as we can get, which in the case of the D-Wave quantum computer (at a cost of around $15 million) is around -273°C. The latest D-Wave quantum processor contains 2,000 qubits, but that may not be as exciting as it sounds.
While some of the world's biggest companies, including Google, have bought one (that's where the 100 million times faster figure mentioned earlier comes from), the scientific jury is still out. That's because D-Wave uses a different type of quantum computing known as 'quantum annealing'. Critics suggest that the qubits used are of 'low quality' and that the tests behind those impressive figures were optimised for the system architecture. Certainly it's a different approach to the accepted notion of a general-purpose quantum computer.
That said, the Quantum AI Lab at Google is already testing a prototype 20 qubit processor of its own and reckons it will have cracked a working 49 qubit chip by 2018. It is currently working on pushing the error rate, the complement of the measure known as 'two-qubit fidelity', down to a level that would enable the processor to beat existing supercomputers. Error correction is just another hurdle that has to be cleared before quantum computing becomes anything like workable. Don't expect the in-lab stuff to make any kind of commercial appearance any time soon. Current estimates reckon that we are looking at ten years at least before stable, error-corrected quantum computers of this type become a reality.
The graphene connection
One of the developments that could shorten the wait is graphene. An atomic-scale hexagonal lattice made of carbon atoms, graphene is one of the real scientific wonder-materials of recent times: 200 times stronger than steel, an efficient conductor of heat and electricity, cheap to produce and just one atom thick. Now researchers have been using it to create a quantum capacitor with better resistance to electromagnetic interference than traditional designs. This, the researchers hope, can be used to create the stable, interference-resistant qubits that quantum computing requires.
Quantum crypto
This all sounds very interesting from a geek perspective, but what about the real-world implications of quantum computing? I'm a security geek by trade, so the prospect of being able to factor very large numbers, the products of two large primes, almost instantly certainly worries me. Why? Because the resources and time required to factor those numbers sit at the very heart of the encryption systems, such as RSA, that we use to protect our data today. If a quantum computer can bring the process of cracking a crypto key down from hundreds of thousands, even billions in some cases, of years to just seconds then we are all in very big trouble. The one consolation is that quantum computing research is stupidly expensive, and the good guys are likely to crack it, no pun intended, long before the bad guys get to have a go.
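To see why factoring matters so much, here's a deliberately toy-sized sketch. The numbers are tiny for illustration (real keys use moduli hundreds of digits long), and the trial division below merely stands in for what Shor's algorithm would do on a quantum computer:

```python
# Toy illustration of why factoring breaks RSA-style encryption.
p, q = 61, 53                 # the secret primes (tiny, for illustration)
n = p * q                     # public modulus: 3233
e = 17                        # public exponent

# An attacker who can factor n recovers p and q...
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break

# ...and from them derives the private key d, undoing the encryption entirely.
phi = (p_found - 1) * (q_found - 1)
d = pow(e, -1, phi)           # modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))  # -> 42: the plaintext, recovered by the attacker
```

On a classical machine that factoring loop scales catastrophically as the modulus grows, which is exactly the safety margin a quantum computer threatens to erase.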
And anyway, remember I mentioned something called entanglement earlier? Well, that is something of a saving grace for the quantum crypto systems of the future. What Einstein referred to as “spooky action at a distance”, entanglement, together with the wave function collapse phenomenon, looks set to enable a 'perfect privacy' technology known as Quantum Key Distribution. Such systems are already being developed, and work by applying the Heisenberg Uncertainty Principle, which can be boiled down to not being able to observe something without changing the thing you are observing.
A crypto system firing single photons a million times per second along fibre optic cables between two nodes can be used. Detectors at the network nodes play spot-the-photon and determine a secret key to encode the data across that communication channel. Anyone attempting to eavesdrop upon that channel, a hacker for example, creates a disturbance that, according to Heisenberg, scrambles the photons. The presence of the hacker is detected, that comms channel is closed and another established to try again. All in an instant. Only an unobserved channel will successfully transmit the packet of data.
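Here's a toy classical simulation of that detection trick, loosely modelled on the well-known BB84 protocol (the qber function and the intercept-and-resend behaviour are our own illustrative constructions, not any vendor's implementation):

```python
# Sender encodes random bits in random polarisation bases; receiver measures
# in random bases and keeps only the positions where bases match. An
# eavesdropper measuring in the wrong basis scrambles the photon, which
# shows up as ~25% errors in the sifted key.
import random

def qber(n_photons, eavesdropper=False):
    """Return the error rate on the sifted key (matching-basis positions)."""
    kept, errors = 0, 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        alice_basis = random.choice("+x")          # rectilinear or diagonal

        photon_bit, photon_basis = bit, alice_basis
        if eavesdropper:
            eve_basis = random.choice("+x")
            if eve_basis != photon_basis:
                photon_bit = random.randint(0, 1)  # wrong basis: random result
            photon_basis = eve_basis               # photon re-sent in Eve's basis

        bob_basis = random.choice("+x")
        if bob_basis != alice_basis:
            continue                               # sifting: discard mismatches
        bob_bit = photon_bit if bob_basis == photon_basis else random.randint(0, 1)
        kept += 1
        errors += (bob_bit != bit)
    return errors / kept

print(f"error rate, quiet channel:  {qber(100_000):.3f}")        # ~0.000
print(f"error rate, with intruder: {qber(100_000, True):.3f}")   # ~0.250
```

A near-zero error rate means the channel was unobserved; anything approaching 25% tells the two nodes someone was listening, so they throw the key away and start again.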
What does a quantum computing future promise?
Security implications apart, what does the future of quantum computing hold for us? Well, in the UK the Ministry of Defence labs have been looking at using quantum technology to measure the smallest fluctuations in the gravitational pull of the Earth, which could lead to a radar-like system able to see both through walls and deep underground. Sticking with the UK, the University of Glasgow has been researching the use of quantum technology to develop imaging cameras that can detect light down to a single photon. The implications for healthcare are immense; this could lead to an ability to detect cancer without invasive procedures, for example.
In Australia, researchers are also looking at healthcare and medical imaging. The Centre for Quantum Computation and Communication Technology has a nano-MRI project that makes use of the qubit's magnetic properties to produce, in theory, images at the molecular level.