Quantum Confusion Keeps Coming with Qubits

Schrödinger’s cat is scratching its head over the “topological” qubit that is causing a buzz in quantum computing. We should be, too

[Illustration: polygonal cat looking at viewer. Credit: Phatrapong/Getty Images]

Floating above the world of computing—and the investors who love it—flies the notion that quantum physics will soon change everything. Quantum computers will design us new drugs, new batteries, and more. And then they’ll break standard encryption protocols, leaking our credit-card data before we can buy any of these wonders.

You can’t swing a simultaneously dead and alive cat without hitting another claim that we are this close to harnessing the magical quantum computing bit—the qubit—that physicist Richard Feynman first challenged computer scientists to deliver nearly 45 years ago.

The problem is that Feynman’s “machine of a different kind”—which takes advantage of quantum physics’ tantalizing weirdness—requires extremely hard physics. Whether Microsoft or Amazon or anyone else gushing in press releases has achieved it remains nebulous.




For now, it seems, your encrypted data are safe(ish). As one financial industry observer speaking of the technology told the Wall Street Journal in February, “seems like it’s always five years away”—or 20 years, according to Nvidia CEO Jensen Huang.

Or maybe it will never arrive. Today’s quantum computing investors are essentially backing competing physics experiments in a derby to create Feynman’s machine. Although they’ve made progress, no clear winner has emerged. That’s despite claims of moves into practical use and demonstrations of small quantum computers outperforming ordinary ones in special cases.

[Graphic: classical bits versus qubits, and how superposition and entanglement let quantum computers surpass classical machines. Credit: Jen Christiansen]

The fundamental problem remains, however, that making the robust qubits at the heart of a quantum computer—as opposed to the simpler bits your laptop processes—requires hard-to-do physics.

Today’s computers just manipulate bits, which are fixed at either a 0 or a 1 in value. They perform calculations in assembly-line fashion. Qubits instead hold values suspended in superposition between 0 and 1, like Schrödinger’s poor cat: a single qubit can be any weighted blend of the two possibilities. Instead of an assembly line of bits, an array of linked, or “entangled,” qubits effectively solves problems in exponential leaps, holding all possible values simultaneously as it calculates.
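The superposition and entanglement described above can be sketched in a few lines of plain NumPy—an illustration of the underlying math, not of any vendor’s hardware:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a unit vector of two complex
# "amplitudes": state = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: measurement yields 0 or 1 with 50% probability each.
plus = (ket0 + ket1) / np.sqrt(2)
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]

# Two entangled qubits (a Bell state): the joint state cannot be split into
# independent single-qubit states, which is why n qubits can carry 2**n
# amplitudes at once.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]
```

Each added qubit doubles the number of amplitudes in play, which is the source of the “exponentially fast leaps” quantum computers promise.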

The trouble has been in finding the right qubit. Qubits are very fragile while calculations are being performed on them, which can lead to disqualifying error rates. That has led to years of incremental advances. The latest turn was Microsoft’s February 2025 announcement of a “topological” quantum computer, touted in a news release and accompanied by an experimental paper in Nature. (Nature and Scientific American are both part of Springer Nature.) Basically, the makers of Windows are betting they can create a bleeding-edge quantum physics effect inside superconducting aluminum wires. That effect is the induction of a still-theoretical Majorana particle that behaves magnetically as both an electron and an anti-electron (as in antimatter) in those wires. That magnetic state of the “quasiparticle” inside the wire is the qubit.


Theoretically it would be a more stable qubit because interference would have to scramble both ends of the wire simultaneously to destroy any information it encoded. Microsoft said in the news release: “The Majorana 1 processor offers a clear path to fit a million qubits on a single chip that can fit in the palm of one’s hand,” referring to the threshold of qubits for a useful quantum computer.

That sounds great. But, critics soon noted, Nature’s editors included a peer-review note with the Microsoft study that disavowed some of the news release’s claims. In particular, they noted the paper had not definitively shown “Majorana zero modes” in the computer. Translated, that means they want to see more proof. (Microsoft claimed it had created these quasiparticles in 2018, only to have to retract the claim, doubtless adding to the scrutiny.)

Google likewise unveiled a “state-of-the-art quantum chip” for computing last December, and it used “transmon” qubits, first proposed in 2007. These rely on oscillating currents traveling inside 150-micron-wide superconducting capacitors. This Willow chip holds 105 qubits. Only 999,895 to go. Meta’s Mark Zuckerberg soon cast doubt on the technology, depressing quantum computing stocks.

And finally, to end the month of February, Amazon announced an “Ocelot” quantum computing chip with nine qubits, along with its own Nature paper. That chip manipulates a superconducting resonator to serve as a qubit, within which error-tolerant “cat” qubits (named in honor of Schrödinger’s kitten, natch) are controlled by photons, or light particles. The error-correcting capabilities of cat qubits were first demonstrated in 2020.

Other quantum computing approaches would, for example, suspend single ions on a circuit board, turning a single cadmium ion into a qubit, or would use the photons inside laser pulses as qubits, read out by photodetectors.

These are all big bets on engineering 21st-century physics into machinery, with notable progress demonstrated in the past decade, making it possible to see the quantum glass as half-full instead of half-empty. There are many ways to skin the quantum computing cat, it turns out.

Yet no certainty exists that any of these models will work in the end. That raises fears of the field becoming, like nuclear fusion, another physically possible but Sisyphean technology, always 20 years away.

The myriad approaches to creating a quantum computer, all still provisional depending on whether they can be scaled up to the million-qubit realm, reinforce that the field is still in its adolescence. News-release-driven announcements of new chips, with accompanying Wall Street hype, threaten a new kind of tech bubble just as the artificial-intelligence one is fading, which may explain a lot of the attention.

I write this as a science reporter whose first quantum computing story covered a proposal in the journal Science in 1997 to cook up a quantum computer in a coffee cup by using a nuclear magnetic resonance spectrometer (an idea still pursued as of last year). “There’s lots of physics between here and a working quantum computer,” IBM’s David DiVincenzo told me then. He wasn’t kidding.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Dan Vergano is senior opinion editor at Scientific American, where he writes the weekly column Argonaut. He has previously written for Grid News, BuzzFeed News, National Geographic and USA Today. He is chair of the New Horizons committee for the Council for the Advancement of Science Writing and a journalism award judge for both the American Association for the Advancement of Science and the U.S. National Academies of Sciences, Engineering, and Medicine.

More by Dan Vergano
This article was originally published with the title “Quantum Confusion” in SA Special Editions Vol. 34 No. 2s, p. 114.
doi:10.1038/scientificamerican062025-5KMP169E1XeEYFxWcFH5FW