Why every quantum computer will need a powerful classical computer

Sloth_Sloman

Wise, Aged Ars Veteran
147
I'm forever grateful to Ars (and the commenters) for their ability to convey these complicated topics to me in a way that's just understandable enough to make me forget I still have no idea how quantum computing works or is supposed to work, despite probably reading or grazing at least 50 (good!) articles on the topic.

"The future of computing is checking for errors at a rate of 100TB per second!"

* brain aneurysm *

Always interesting to read, though!
 
Upvote
60 (61 / -1)

WinternetHexplorer

Smack-Fu Master, in training
29
We'll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

So what is an "interesting" versus a "sophisticated" calculation? For the current state of the technology, does interesting lie in the range of long division, or more along the lines of factoring large numbers for cryptographic applications? What about sophisticated calcs?
 
Upvote
25 (26 / -1)

the-unknown

Ars Scholae Palatinae
690
Subscriptor++
So, next question - do you have to keep a copy of all that data so you can replay it all to figure out if something went wrong?

And if yes, what sort of storage systems are they talking about? Exabyte size? Or larger? (considering that a petabyte sized storage device will be full in 10 secs at 100 TB per sec, it does not seem big enough). And what sort of connectivity?
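
Quick back-of-the-envelope on how fast that rate eats storage (the capacities below are just my guesses at plausible tiers):

```python
# How long 100 TB/s of correction data takes to fill a few storage sizes.
RATE_TB_PER_S = 100  # the article's figure

for name, capacity_tb in (("1 PB", 1_000), ("100 PB", 100_000), ("1 EB", 1_000_000)):
    seconds = capacity_tb / RATE_TB_PER_S
    print(f"{name}: full in {seconds:,.0f} s ({seconds / 3600:.2f} h)")
```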
 
Upvote
20 (21 / -1)

Dr. Jay

Editor of Sciency Things
9,395
Ars Staff
So what is an "interesting" versus "sophisticated" calculation? For the current state of the technology, does interesting lie in the range of long division, or more along the lines of factoring for large prime numbers for some cryptographical applications? What about sophisticated calcs?
So, "interesting" in this case starts at roughly 100 error-corrected qubits, and involves a complete quantum simulation of a modest sized molecule. "Sophisticated" includes the sort of algorithms that inspired quantum computing in the first place, like factoring two large primes.
 
Upvote
42 (43 / -1)

adespoton

Ars Tribunus Angusticlavius
8,996
So, next question - do you have to keep a copy of all that data so you can replay it all to figure out if something went wrong?

And if yes, what sort of storage systems are they talking about? Exabyte size? Or larger? (considering that a petabyte sized storage device will be full in 10 secs at 100 TB per sec, it does not seem big enough). And what sort of connectivity?
Yeah; from my understanding, what's needed is not a "powerful classical computer" per se, but a specific type of powerful classical processing. Storage will need to scale to match the qubit comparisons required, with a fraction of that again to handle the comparison operations. There will then need to be a qubit mapper that can set a flag for any qubit that fails an equivalency test. We won't need the register/accumulator/processor structure of a modern classical computer, though; it's mostly high-volume throughput calculation backed by a basic state machine.

Replays are theoretically possible, but your qubits will be in the wrong state, so any replay is going to be HUGE since it first needs to reset the entire system. It seems to me the more likely route would be some level of redundancy where the statistical significance of the result degrades by a known, calculated amount but the calculation continues. The algorithm could also have extra loops that come into play if the error margins get too large for a significant result.

"From my understanding" is a significant caveat though; I'm no quantum computing engineer.
 
Upvote
19 (19 / 0)

TimeWinder

Ars Scholae Palatinae
1,527
Subscriptor
Yeah - I am glad they are getting a handle on how to functionally make this work - but man - that really is a big chunk of classical processing to bite off every second!
And the next question becomes: how much and what kind of benefits are we getting from the quantum computer vs. what we would get if we just used that classical computing power for...classical computing?

If enabling quantum solutions requires as much or more error-correcting capability as solving the same problems with deterministic classical methods, we're not gaining much. Or maybe we're only gaining at very large scales, or for very specific problems, or something of that sort?
 
Upvote
16 (20 / -4)

adespoton

Ars Tribunus Angusticlavius
8,996
Now I get it.
Quantum Computing and Fusion Power Generation are both around the corner,
and only a decade or so away.
They both take massive infrastructure that offsets production severely.
And it sounds like we might need one for the other.
You forgot sustainable battery technology.
 
Upvote
5 (7 / -2)
Nothing like building a machine that outputs a shitload of bad data that you have to sift through for the possibility of it being right 1% of the time. Literally don't sneeze in the room, or 50ft away from it.

Reminds me of the video of some datacenter techs that would yell at a JBOD and show all of the metrics jump around due to the vibration.

Edit: Here it is.

https://www.youtube.com/watch?v=tDacjrSCeq4
 
Upvote
15 (20 / -5)

JudgeMental

Smack-Fu Master, in training
60
Subscriptor++
And the next question becomes: how much and what kind of benefits are we getting from the quantum computer vs. what we would get if we just used that classical computing power for...classical computing?

If it requires as much or more error correcting capability to allow quantum solutions as it would to use classically deterministic solutions to the same problems, we're not gaining much. Or maybe we're only gaining at very large scales or very specific problems or something of that sort?
The classical compute described in the article isn't generalized compute like you're probably thinking - the headline is mildly misleading in what it's implying. It's powerful in the sense that it'll (theoretically) scale to huge amounts of data handled per second, for sure. If you had to do that on an x86 or ARM computer, you'd probably be right on the money about the cost/benefits. But because they've designed dedicated hardware, they've traded the flexibility and relative inefficiency of those architectures for being really efficient at doing one thing.

I don't know enough about quantum computing to know how qubit scaling aligns to the problems solved, but my basic understanding is that you'd only need a few hundred/thousand qubits to get into really interesting territory that classical supercomputers struggle with. Assuming this chip scales somewhat linearly (starting at 8 mW per logical qubit per the article), then you're still in the single/double-digit watt range by the time you get there. Total energy consumption will certainly be higher for various reasons, but even an order of magnitude or two higher is minuscule compared to the supercomputer it's theoretically replacing.
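
Back-of-the-envelope on that linear-scaling assumption (the 8 mW per logical qubit is the article's figure; the qubit counts are just ones I picked):

```python
# Decoder power if it really does scale linearly with logical qubit count.
MW_PER_LOGICAL = 8  # milliwatts per logical qubit, per the article

for logical_qubits in (100, 1_000, 10_000):
    watts = logical_qubits * MW_PER_LOGICAL / 1000
    print(f"{logical_qubits:>6} logical qubits -> ~{watts:g} W of decode power")
```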
 
Upvote
17 (17 / 0)

Emon

Ars Praefectus
3,997
Subscriptor++
Twirl. Now you have e, i and pi.
This is literally true. I didn't understand Euler's identity until I saw the 3blue1brown video with the animation that shows a circle "unrolling" onto the complex plane.

Suddenly it made perfect sense. Euler's identity isn't the "strange, beautiful, inconceivable" quirk of mathematics I had been led to believe it was. After seeing the visualization I thought, "oh! Well of COURSE it's like that! It couldn't possibly be any other way!"

3D graphics are essentially a necessity for properly teaching high-level math. I say "essentially" because, while not strictly required, I'm pretty sure 3blue1brown's videos alone could improve college-level math grades across the world by a very significant amount. It helps that he wrote his own software for the animations he makes; that's really what sets his stuff apart. Which is open source, of course. Of courses courses should be using the open source course s- eh I'm trying too hard for this pun I'll stop have a nice evening everyone
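
If anyone wants to poke at it numerically, the whole identity is just e^(iθ) = cos θ + i sin θ evaluated at θ = π (quick sketch, nothing 3blue1brown-specific here):

```python
# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
# At theta = pi the point on the unit circle is -1, which is Euler's identity.
import cmath
import math

for theta in (0, math.pi / 2, math.pi):
    z = cmath.exp(1j * theta)
    print(f"theta = {theta:.4f}: e^(i*theta) = {z.real:+.4f} {z.imag:+.4f}i")

print(cmath.exp(1j * math.pi) + 1)  # ~0, up to floating-point residue
```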
 
Upvote
22 (23 / -1)

unequivocal

Ars Praefectus
4,694
Subscriptor++
That 100TB/s throughput sounds impressive, but I'm wondering how it stacks up against traditional silicon - such as the L3 cache throughput on a modern processor? Is this throughput really that big a jump, or is it more the application here that's noteworthy? (I googled around a bit for L3 cache bandwidth but didn't see an obvious answer, so I'm hoping someone here can put this in context.)
 
Upvote
3 (5 / -2)
The classical compute described in the article isn't generalized compute like you're probably thinking - the headline is mildly misleading in what it's implying. It's powerful in the sense that it'll (theoretically) scale to huge amounts of data handled per second, for sure. If you had to do that on an x86 or ARM computer, you'd probably be right on the money about the cost/benefits. But because they've designed dedicated hardware, they've traded the flexibility and relative inefficiency of those architectures for being really efficient at doing one thing.

I don't know enough about quantum computing to know how qubit scaling aligns to the problems solved, but my basic understanding is that you'd only need a few hundred/thousand qubits to get into really interesting territory that classical supercomputers struggle with. Assuming this chip scales somewhat linearly (starting at 8 mW per logical qubit per the article), then you're still in the single/double-digit watt range by the time you get there. Total energy consumption will certainly be higher for various reasons, but even an order of magnitude or two higher is minuscule compared to the supercomputer it's theoretically replacing.
This chip isn't replacing a supercomputer; it's just a small component in the system. The big power draw of creating and maintaining physical qubits is still going to be there.

I think the innovation here is adding capability without significantly increasing the (already massive) power draw.
 
Upvote
4 (4 / 0)

herozero

Ars Scholae Palatinae
1,139
Subscriptor
That 100tb throughout sounds impressive but I'm wondering how it stacks up with traditional silicon - such as the l3 cache throughput on a modern processor? Is this throughput really that big a jump or is it more the application here that's noteworthy? (I googled around a bit for an answer for l3 cache bandwidth but didn't see an obvious answer, so hoping someone here can put this in context).
100TB. Big B. Impressed yet?

Dual channel DDR5 at 6000 on a Zen4 CPU hits ~96GB/s?
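
Taking both numbers in this thread at face value, the gap is about three orders of magnitude (a rough sketch, nothing more):

```python
# Article's decode rate vs. the dual-channel DDR5 figure quoted above.
quantum_decode_bytes_per_s = 100e12   # 100 TB/s
ddr5_dual_channel_bytes_per_s = 96e9  # ~96 GB/s

ratio = quantum_decode_bytes_per_s / ddr5_dual_channel_bytes_per_s
print(f"~{ratio:,.0f}x the DDR5 figure")  # ~1,042x
```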
 
Upvote
0 (2 / -2)

Zylon

Ars Scholae Palatinae
861
Subscriptor
It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.
I found that detail surprising. My understanding of quantum error correction led me to believe that far fewer qubits would be needed.

To protect a classical bit (cbit) in transit, send it three times; any single bit error can then be detected and corrected. Qubits are more complex (pun intended). States like |0> + |1> and |0> - |1> differ only in phase, which has no equivalent concept in cbits.

Quantum error correction would seem to be impossible, since you can't copy qubits (the no cloning theorem) and measuring them destroys their entanglement. The One Weird Trick that makes it work is to entangle some other qubits with the encoded qubits, and then measure these ancillary qubits. You won't gain any useful information from this measurement about the actual state of the encoded qubits - again, the no cloning theorem prohibits that - but you can learn if an error occurred and how to correct it!

Yes, that's as bizarre as it sounds. Welcome to quantum computer science. That's what these guys are doing in their FPGAs: detecting the error state of the encoded qubit and calculating the appropriate error correction to apply to the encoding.

In his original paper on quantum error correction, Shor (yes, the same Shor behind the prime factoring algorithm) found a nine-qubit encoding that corrects all types of qubit errors. It has since been proven that five is the minimum needed, although that encoding requires quantum gates that are difficult to realize in hardware. A seven-qubit encoding that uses only simple gates looks like the sweet spot. Hence my confusion about dozens of qubits being needed for each logical qubit.
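
For a feel of the classical analogue of that ancilla trick: in the three-bit repetition code below, the two parity checks locate a single bit flip without ever revealing the encoded value, which is the flavor of what a syndrome measurement buys you. (This is the plain classical version; the quantum codes measure stabilizers instead, but the lookup-table decoding step is the same idea.)

```python
# Classical 3-bit repetition code with syndrome-style decoding.
# s1 = b0 XOR b1, s2 = b1 XOR b2: the pair (s1, s2) locates a single
# bit flip but says nothing about the encoded value itself.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def syndrome(codeword: list[int]) -> tuple[int, int]:
    b0, b1, b2 = codeword
    return (b0 ^ b1, b1 ^ b2)

# Which bit to flip for each syndrome; (0, 0) means no error detected.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def decode(codeword: list[int]) -> int:
    flip = CORRECTION[syndrome(codeword)]
    if flip is not None:
        codeword[flip] ^= 1
    return codeword[0]

word = encode(1)
word[1] ^= 1                 # single bit flip during "transmission"
print(syndrome(word))        # (1, 1) -> error on bit 1, encoded value still unknown
print(decode(word))          # 1
```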
 
Upvote
9 (11 / -2)

Dark Jaguar

Ars Tribunus Angusticlavius
9,874
I found that detail surprising. My understanding of quantum error correction led me to believe that far fewer qubits would be needed.

To protect a classical bit (cbit) in transit, send it three times; any single bit error can then be detected and corrected. Qubits are more complex (pun intended). States like |0> + |1> and |0> - |1> differ only in phase, which has no equivalent concept in cbits.

Quantum error correction would seem to be impossible, since you can't copy qubits (the no cloning theorem) and measuring them destroys their entanglement. The One Weird Trick that makes it work is to entangle some other qubits with the encoded qubits, and then measure these ancillary qubits. You won't gain any useful information from this measurement about the actual state of the encoded qubits - again, the no cloning theorem prohibits that - but you can learn if an error occurred and how to correct it!

Yes, that's as bizarre as it sounds. Welcome to quantum computer science. That's what these guys are doing in their FPGAs: detecting the error state of the encoded qubit and calculating the appropriate error correction to apply to the encoding.

In his original paper on quantum error correction, Shor (yes, the same Shor behind the prime factoring algorithm) found a nine-qubit encoding that corrects all types of qubit errors. It has since been proven that five is the minimum needed, although that encoding requires quantum gates that are difficult to realize in hardware. A seven-qubit encoding that uses only simple gates looks like the sweet spot. Hence my confusion about dozens of qubits being needed for each logical qubit.
Thank you so much for all this! Even though I read the article, I was unclear on how reliable error detection actually was with this method. I had presumed that full error protection was flat out impossible without using other quantum bits which themselves would be prone to errors. It's clear you know your stuff.

All I can presume, taking what you've said in combination with the article, is that it comes down to the difference between a theoretical minimum and the practical realities of construction, which make that seven-qubit encoding infeasible in practice.
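
For what it's worth, if the hardware is running something like the surface code, the commonly quoted back-of-the-envelope count (I'm taking this on faith, so treat it as a rough sketch) is about 2d² − 1 physical qubits per logical qubit at code distance d, which is where "dozens" comes from:

```python
# Rough physical-qubit overhead for a distance-d (rotated) surface code:
# d*d data qubits plus d*d - 1 measurement qubits.
for d in (3, 5, 7, 11):
    physical = 2 * d * d - 1
    print(f"distance {d:2}: ~{physical} physical qubits per logical qubit")
```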
 
Upvote
0 (1 / -1)

unequivocal

Ars Praefectus
4,694
Subscriptor++
100TB. Big B. Impressed yet?

Dual channel DDR5 at 6000 on a Zen4 CPU hits ~96GB/s?
Thanks - that is good context, but "on wafer" throughput to L3 cache should be much, much faster than bus throughput to DDR5 RAM, right? L3 cache transfer speed is typically measured in CPU clock ticks rather than seconds (which is why it was hard for me to find its per-second bandwidth), but that suggests to me it's very fast compared to RAM (its whole point is to hold data so the CPU doesn't have to fetch it over the slower RAM bus).

I'm just curious if this company has developed new solutions or if this is similar to what's been done on silicon before and they are applying it to a new problem in a clever way?
 
Upvote
2 (2 / 0)

JudgeMental

Smack-Fu Master, in training
60
Subscriptor++
This chip isn't replacing a supercomputer, it is just a small component in the system. The big power draw with creating/maintaining physical qbits is still going to be there.

I think the innovation here is adding capability without increasing the (already massive) power draw significantly.
Yeah, I was attempting to reference the chip and the total energy cost it would add to the overall system - chiplet tax, extra data buses, etc. I tried to look into estimates of energy consumption per physical qubit to compare, but couldn't really find anything I was comfortable using for hard numbers. It did bring up a different concern - since each logical qubit sits on top of some multiple of physical qubits, that would certainly inflate the energy needs of the entire system. But that leads down a whole rabbit trail of relative efficiencies and scaling impacts that I just don't feel qualified to comment on.
 
Upvote
-1 (0 / -1)