> Error-correcting a quantum computer will mean processing 100TB every second.

Holy shit. That's a big Twinkie.
> Holy shit. That's a big Twinkie.

Yeah - I am glad they are getting a handle on how to functionally make this work - but man - that really is a big chunk of classical processing to bite off every second!
> We'll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

So what is an "interesting" versus "sophisticated" calculation? For the current state of the technology, does "interesting" lie in the range of long division, or more along the lines of factoring large numbers for cryptographic applications? And what about sophisticated calcs?

> So what is an "interesting" versus "sophisticated" calculation? […]

So, "interesting" in this case starts at roughly 100 error-corrected qubits, and involves a complete quantum simulation of a modest-sized molecule. "Sophisticated" includes the sort of algorithms that inspired quantum computing in the first place, like factoring the product of two large primes.
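For scale: taking the article's numbers at face value (about 100 logical qubits for the simplest interesting problems, and "dozens" of hardware qubits per logical qubit - I'll assume 30 purely for the arithmetic), the hardware count works out like this:

```python
# Back-of-the-envelope using the article's figures; the 30:1 ratio is an
# assumed midpoint of "dozens", not a number given in the article.
logical_needed = 100        # simplest "interesting" calculations
physical_per_logical = 30   # assumed: "dozens" of hardware qubits each

print(logical_needed * physical_per_logical)  # -> 3000 hardware qubits
```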
> So, next question - do you have to keep a copy of all that data so you can replay it all to figure out if something went wrong? And if yes, what sort of storage systems are they talking about? Exabyte size? Or larger? (Considering that a petabyte-sized storage device will be full in 10 seconds at 100TB per second, it does not seem big enough.) And what sort of connectivity?

Yeah; from my understanding, what's needed is not a "powerful classical computer" per se, but a specific type of powerful classical processing. Storage will need to scale to match the qubit comparisons required, with a fraction of that again to handle the comparison operations. There will then need to be a qubit mapper that can set a flag for any qubit that fails an equivalency test. We won't need the classical register/accumulator/processor structure that a modern classical computer has, though; it's mostly about high-volume throughput calculations backed by a basic state machine.
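A toy sketch of the kind of compare-and-flag processing being described - my own illustration, not anything from the article, and the round format is made up:

```python
# Toy model of the "basic state machine" front end: no general-purpose
# CPU features, just a per-qubit equivalency test between consecutive
# measurement rounds, with a flag set for any qubit that fails it.

def flag_failures(prev_round: list[int], curr_round: list[int]) -> list[int]:
    """Return the indices of qubits whose measured value changed
    between rounds, i.e. the ones to flag for correction."""
    return [i for i, (p, c) in enumerate(zip(prev_round, curr_round)) if p != c]

# Example: only qubit 2 flipped between rounds, so only it gets flagged.
print(flag_failures([0, 1, 0, 0], [0, 1, 1, 0]))  # -> [2]
```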
> Yeah - I am glad they are getting a handle on how to functionally make this work - but man - that really is a big chunk of classical processing to bite off every second!

And the next question becomes: how much and what kind of benefits are we getting from the quantum computer vs. what we would get if we just used that classical computing power for... classical computing?
> (I know, there's certainly more to it!)

And certainty less.
> All this so we can work directly and physically with the square root of -1. (I know, there's certainly more to it!)

Twirl. Now you have e, i and pi.
> Now I get it. Quantum Computing and Fusion Power Generation are both around the corner, and only a decade or so away. They both take massive infrastructure that offsets production severely. And it sounds like we might need one for the other.

You forgot sustainable battery technology.
> Pretty sure 5G doesn’t use RADIUS, it uses Diameter. Even LTE could use Diameter.

Downvoters, be nice! This was just a comment intended for a different thread, and who among us…
> And the next question becomes: how much and what kind of benefits are we getting from the quantum computer vs. what we would get if we just used that classical computing power for... classical computing? If it requires as much or more error-correcting capability to enable quantum solutions as it would to solve the same problems with deterministic classical methods, we're not gaining much. Or maybe we only gain at very large scales, or on very specific problems, or something of that sort?

The classical compute described in the article isn't generalized compute like you're probably thinking - the headline is mildly misleading in what it implies. It's powerful in the sense that it'll (theoretically) scale to huge amounts of data handled per second, for sure. If you had to do that on an x86 or ARM computer, you'd probably be right on the money about the cost/benefits. But because they've designed dedicated hardware, they've traded the flexibility and relative inefficiency of those architectures for being really efficient at doing one thing.
> Twirl. Now you have e, i and pi.

This is literally true. I didn't understand Euler's identity until I saw the 3blue1brown video with the animation that shows a circle "unrolling" onto the complex plane.
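For reference, the "twirl" is Euler's formula: multiplying by e^(iθ) rotates a point around the unit circle in the complex plane, and θ = π lands it exactly on -1:

```latex
% Euler's formula: multiplying by e^{i\theta} rotates ("twirls") a point
% through angle \theta around the unit circle in the complex plane.
\[ e^{i\theta} = \cos\theta + i\sin\theta \]
% Setting \theta = \pi lands the twirl on -1, tying e, i, and pi together:
\[ e^{i\pi} + 1 = 0 \]
```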
> Downvoters, be nice! This was just a comment intended for a different thread, and who among us…

There should be a "sidevote" button for these kinds of pun posts. The internet equivalent of a collective groan.
> The classical compute described in the article isn't generalized compute like you're probably thinking […] I don't know enough about quantum computing to know how qubit scaling maps onto the problems solved, but my basic understanding is that you'd only need a few hundred or thousand qubits to get into really interesting territory that classical supercomputers struggle with. Assuming this chip scales somewhat linearly (starting at 8mW per logical qubit, per the article), you're still in the single- to double-digit watt range by the time you get there. Total energy consumption will certainly be higher for various reasons, but even an order of magnitude or two higher is minuscule compared to the supercomputer it's theoretically replacing.

This chip isn't replacing a supercomputer; it is just a small component in the system. The big power draw from creating/maintaining physical qubits is still going to be there.
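Spelling out the scaling estimate quoted above (straight linear scaling from the article's 8mW figure is an assumption - real overheads rarely scale perfectly linearly):

```python
# Linear extrapolation of decoder power from 8 mW per logical qubit.
mw_per_logical = 8

for logical_qubits in (100, 1_000, 10_000):
    watts = logical_qubits * mw_per_logical / 1000
    print(f"{logical_qubits:>6} logical qubits -> {watts:g} W")

# ->    100 logical qubits -> 0.8 W
#      1000 logical qubits -> 8 W
#     10000 logical qubits -> 80 W
```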
> I wonder at times if we're looking at quantum computing all wrong, and perhaps we should accept that errors are simply going to be a part of such computations and build around that expectation.

Isn't that exactly what logical qubits do?
> That 100TB throughput sounds impressive, but I'm wondering how it stacks up against traditional silicon - such as the L3 cache throughput on a modern processor? Is this throughput really that big a jump, or is it more the application here that's noteworthy? (I googled around a bit for an answer on L3 cache bandwidth but didn't see an obvious one, so I'm hoping someone here can put this in context.)

100TB. Big B. Impressed yet?
> It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.

I found that detail surprising. My understanding of quantum error correction led me to believe that far fewer qubits would be needed.
> I found that detail surprising. My understanding of quantum error correction led me to believe that far fewer qubits would be needed.
>
> To guarantee that a classical bit (cbit) is transmitted correctly, send it three times. Any single-bit error can be detected and corrected using this encoding (majority vote). Qubits are more complex (pun intended). States like |0> + |1> and |0> - |1> differ only in phase, which has no equivalent concept in cbits.
>
> Quantum error correction would seem to be impossible, since you can't copy qubits (the no-cloning theorem) and measuring them destroys their entanglement. The One Weird Trick that makes it work is to entangle some other qubits with the encoded qubits, and then measure these ancillary qubits. You won't gain any useful information from this measurement about the actual state of the encoded qubits - again, the no-cloning theorem prohibits that - but you can learn whether an error occurred and how to correct it!
>
> Yes, that's as bizarre as it sounds. Welcome to quantum computer science. That's what these guys are doing in their FPGAs: detecting the error state of the encoded qubit and calculating the appropriate error correction to apply to the encoding.
>
> In his original paper on quantum error correction, Shor (yes, the same Shor from the prime-factoring algorithm) found a nine-qubit encoding that corrects all types of qubit errors. It has since been proven that five is the minimum needed, although it requires quantum gates that are difficult to realize in hardware. A seven-qubit encoding that uses only simple gates looks like the sweet spot. Hence my confusion about dozens of qubits needed for each logical qubit.

Thank you so much for all this! Even though I read the article, I was unclear on how reliable error detection actually was with this method. I had presumed that full error protection was flat-out impossible without using other quantum bits, which themselves would be prone to errors. It's clear you know your stuff.
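If it helps to see the ancilla trick in classical miniature: for the three-bit repetition code described above, you never read the data bits directly - you compute two parity checks (the classical stand-in for measuring ancilla qubits), and the syndrome tells you which bit, if any, to flip, while revealing nothing about the encoded value. A sketch:

```python
# Syndrome decoding of the classical 3-bit repetition code. The parities
# identify the flipped bit without exposing the encoded value itself -
# the classical shadow of measuring ancilla qubits.

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]  # "send it three times"

def correct(word: list[int]) -> list[int]:
    s1 = word[0] ^ word[1]  # parity check 1 (the "ancilla measurement")
    s2 = word[1] ^ word[2]  # parity check 2
    # Each syndrome pair implicates at most one bit.
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flip is not None:
        word[flip] ^= 1
    return word

word = encode(1)
word[2] ^= 1          # a single bit error in transit
print(correct(word))  # -> [1, 1, 1]: error located and fixed
```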
> and in the afternoon he was working with the iron trap qubits.

"iron trap" -> "ion trap", perhaps?

> "iron trap" -> "ion trap", perhaps?

This appears to be a not-uncommon typo; further instances in the same source texts (often academic/scientific) show "ion trap". Apparently more non-technical people are involved in working up papers than I would have expected.
> 100TB. Big B. Impressed yet? Dual channel DDR5 at 6000 on a Zen4 CPU hits ~96GB/s?

Thanks - that is good context, but "on-wafer" throughput to L3 cache should be much, much faster than bus throughput to DDR5 RAM, right? L3 cache transfer speed is typically measured in CPU clock ticks, not seconds (which is why it was hard for me to find its per-second bandwidth), but that suggests to me it's very fast compared to RAM (its literal point is to hold data so the CPU doesn't have to ask for it over the slower RAM bus).
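For what it's worth, the ~96GB/s figure checks out arithmetically (assuming a standard dual-channel, 64-bit-per-channel DDR5 setup - my assumption about the poster's numbers):

```python
# Peak dual-channel DDR5-6000 bandwidth vs. the article's 100TB/s.
transfers_per_sec = 6000e6  # DDR5-6000: 6000 MT/s
bytes_per_transfer = 8      # 64-bit channel
channels = 2

ddr5_bw = transfers_per_sec * bytes_per_transfer * channels
print(f"DDR5 peak: {ddr5_bw / 1e9:.0f} GB/s")      # -> 96 GB/s
print(f"100 TB/s ~ {100e12 / ddr5_bw:.0f}x that")  # -> ~1042x
```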
> This chip isn't replacing a supercomputer; it is just a small component in the system. The big power draw from creating/maintaining physical qubits is still going to be there. I think the innovation here is adding capability without increasing the (already massive) power draw significantly.

Yeah, I was attempting to reference the chip and the total energy cost it would add to the overall system - chiplet tax, extra data buses, etc. I tried to look into estimates of energy consumption per real qubit to compare, but couldn't really find anything I was comfortable using for hard numbers. It did bring up a different concern - since it takes some multiple of real qubits to make a logical qubit, that would certainly inflate the energy needs of an entire system. But that leads into a rabbit trail of relative efficiencies and scaling impacts that I just don't feel qualified to comment on.
> There should be a "sidevote" button for these kinds of pun posts. The internet equivalent of a collective groan.

Rotten tomato button.
> Rotten tomato button.

Or that oversized crook they use to pull bad performances off the stage?