The narrative surrounding quantum computing is saturated with hype, promising solutions to humanity’s toughest challenges while fueling a multi-billion-dollar investment frenzy. Yet the reality is a monumental scientific and engineering challenge, with fault-tolerant machines confined to an unforeseeable future by the exponential costs of Quantum Error Correction (QEC).

This essay examines the long-term prospects of quantum computing and the overhyped investment landscape around it. Using plain language wherever possible, it aims to cut through the fervor with rigor and to square the hype with reality.

What Are the Long-Term Promises of Quantum Computing?

Quantum computing leverages superposition, entanglement, and interference to perform computations unattainable by classical computers. Its long-term potential lies in solving problems with exponential complexity, such as:

  • Drug Discovery: Simulating molecular interactions at the quantum level to design new pharmaceuticals.
  • Optimization: Solving complex logistical problems, like supply chain management or traffic routing.
  • Materials Science: Designing novel materials with tailored properties, such as superconductors.
  • Machine Learning: Accelerating algorithms for pattern recognition and data analysis.

Theoretically, algorithms like Shor’s (for integer factorization) and Grover’s (for search problems) offer exponential or quadratic speedups over classical methods. However, these benefits hinge on building fault-tolerant quantum computers with thousands of stable logical qubits, a goal far beyond current capabilities due to the brutal, soul-crushing physics of noise. The dream is exciting, but the timeline is measured in decades, if ever, certainly not in years.
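To give a sense of scale for these claimed speedups, here is a minimal sketch in Python (with illustrative search-space sizes of my own choosing) comparing the rough query counts of a classical brute-force search with Grover’s quadratic speedup:

```python
import math

# Grover's algorithm needs roughly (pi/4) * sqrt(N) oracle queries to find one
# marked item among N candidates; a classical search needs ~N/2 on average.
# The search-space sizes below are illustrative, not tied to any real workload.
for n_bits in (20, 40, 60):
    N = 2 ** n_bits                        # size of the unstructured search space
    classical = N / 2                      # expected classical queries
    grover = (math.pi / 4) * math.sqrt(N)  # idealized, noise-free Grover iterations
    print(f"{n_bits}-bit space: classical ~{classical:.2e}, Grover ~{grover:.2e} queries")
```

The catch, of course, is that every one of those Grover iterations must execute on hardware reliable enough not to fail first, which is precisely the problem discussed next.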

Why Is Quantum Computing So Difficult to Achieve?

The core obstacle is the fragility of quantum states. Physical qubits, whether superconducting circuits, trapped ions, or neutral atoms, are prone to decoherence from environmental noise (e.g., magnetic fields, temperature fluctuations), with error rates that can reach several percent per gate operation. Even with very expensive error correction, the most cutting-edge demonstrations manage only a few dozen logical qubits at best, with error rates still around or above 1% per logical gate operation. This renders direct computation unreliable for algorithms requiring just a hundred logical operations, let alone real problems such as breaking a modern encryption algorithm, which would require billions of logical operations. See The Quantum Cryptopocalypse Myth.
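A back-of-the-envelope calculation shows why such error rates are fatal. Assuming, as a simplification, that gate errors are independent, the probability that a circuit of N operations finishes without a single error is (1 - p)^N for a per-gate error rate p. The numbers below are illustrative, not measured:

```python
# Probability that a circuit of n_gates operations completes with no error,
# assuming independent errors at rate p per gate. All values are illustrative.
def success_probability(p: float, n_gates: int) -> float:
    return (1.0 - p) ** n_gates

for p in (1e-2, 1e-3):                            # plausible physical / early-logical error rates
    for n_gates in (100, 10_000, 1_000_000_000):  # toy circuit, modest algorithm, Shor-scale run
        print(f"p={p:.0e}, gates={n_gates:.0e}: success ~ {success_probability(p, n_gates):.3g}")
```

Even at a 0.1% error rate, a billion-operation computation succeeds with probability indistinguishable from zero, which is why error correction is not optional.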

Quantum Error Correction (QEC) proposes to solve this problem by bundling hundreds or thousands of noisy physical qubits into a single, stable logical qubit through multilayered error correction. It is possible mathematically, but extremely costly in engineering, and possibly not achievable at all under strict physical scrutiny.

The cost of multilayered QEC is staggering:

  • Level 1 Logical Qubit: Takes ~20–100 physical qubits to achieve a logical error rate better than the physical rate (e.g., 1% vs. 10%).
  • Higher Levels: Each additional layer multiplies the physical qubit requirement. A logical qubit with a 10^-12 error rate, needed for practical algorithms, might require tens of thousands of physical qubits.

To perform meaningful computations, thousands of logical qubits with extremely low error rates are needed, translating to many millions of pristine physical qubits.
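As a rough illustration of that arithmetic, using the overhead range quoted in this essay (which is itself an estimate):

```python
# Rough physical-qubit estimate from the figures quoted in this essay.
# Both the logical-qubit target and the overhead range are approximate.
logical_qubits_needed = 4_000            # order of magnitude for a useful algorithm
overhead_range = (1_000, 20_000)         # assumed physical qubits per logical qubit

for overhead in overhead_range:
    total = logical_qubits_needed * overhead
    print(f"{overhead:>6} physical per logical -> {total:,} physical qubits in total")
```

That is four to eighty million pristine physical qubits, against roughly a thousand noisy ones today.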

As of mid-2025, the most advanced systems have ~1,000 physical qubits, with logical qubit counts claimed at 48 (Harvard/QuEra) or 24 entangled (Atom Computing/Microsoft) under ideal experimental conditions, and with error rates many orders of magnitude above the level required for practical applications. In other words, the so-called “logical qubits” in these systems barely meet the “Level 1” standard described above, orders of magnitude short of the multilevel standard that practical algorithms require. More on this below.

This reality underscores the vast gulf between current hardware and practical utility.

Concatenated QEC to the Rescue?

In quantum error correction, it is often said that adding a layer of correction squares the error rate. This is the key to QEC’s promise of exponential error suppression: if each additional layer merely reduced the error by a fixed factor, the improvement would accumulate far more slowly than repeated squaring does.

The putative power of concatenated quantum error correction comes from a much more powerful, non-linear scaling, theorized as follows:

  • Level 1: You take a bundle of very noisy physical qubits and encode them into a single, more robust Level 1 Logical Qubit. Let’s say this process is successful and reduces the error rate to a manageable level, for instance, 1% (10^-2). This is your new, improved building block.
  • Level 2: To get an even better qubit, you don’t just add more physical qubits to the original bundle. Instead, you take several of your new and improved Level 1 Logical Qubits and bundle them together into a new, higher-level code to create a Level 2 Logical Qubit. This creates coordinated redundancy. The theory is that, for a Level 2 code to fail, it generally requires at least two of its constituent Level 1 qubits to have an error simultaneously. The probability of two nearly independent failure events happening is the product of their individual probabilities. So, if the chance of one Level 1 logical qubit failing is 10^-2, the chance of two of them failing in a way that breaks the Level 2 code is roughly (10^-2) × (10^-2) = (10^-2)^2 = 10^-4.
  • Level 3: Encoding Level 2 qubits yields (10^-4)^2 = 10^-8, and so on.
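A tiny sketch of that recursion, assuming (as the theory does) that failures at each level are independent:

```python
# Logical error rate after k levels of concatenation, assuming independent
# failures so that each new level squares the previous level's error rate.
def concatenated_error(level_1_error: float, levels: int) -> float:
    error = level_1_error
    for _ in range(levels - 1):
        error = error ** 2          # two constituent blocks must both fail
    return error

for k in range(1, 5):
    print(f"Level {k}: logical error ~ {concatenated_error(1e-2, k):.0e}")
# Level 1: ~1e-02, Level 2: ~1e-04, Level 3: ~1e-08, Level 4: ~1e-16
```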

This exponential noise reduction is what convinces believers in quantum computing that there is a realistic, albeit incredibly difficult, path to the near-perfect logical qubits required for practical quantum computing.

However, none of this has been demonstrated at scale. It exists only in the mathematics: there is no engineering proof, and not even a conclusive scientific one.

Does the Exponential Qubit Cost Negate QEC’s Advantage?

QEC’s defenders argue that the exponential cost does not cancel out the advantage gained. According to this argument, quantum computing will have superior scalability provided one critical condition is met: the Fault-Tolerance Threshold Theorem. It states that if the error rate of individual physical components is below a certain critical value (the threshold), you can use Quantum Error Correction to make the error rate of your logical qubits arbitrarily low.

  • The “If” – The Entry Price: There is a “break-even” point for physical qubit quality, estimated to be an error rate of around 0.1% to 1%. If your hardware is worse than this, QEC does not work: every physical qubit you add introduces more noise than the scheme can remove, and concatenation makes things worse.
  • The Net Advantage: Once the error rate of physical qubits is below the threshold, you are in a winning regime. While you are indeed paying an exponential cost in physical qubits, the resulting drop in the logical error rate is even more powerful. The gain is thus not canceled; it is a true net advantage. The reliability compounds faster than the costs.
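A minimal sketch of that claimed trade-off, using the textbook form of the threshold theorem (logical error after k levels ~ p_th * (p/p_th)^(2^k)) and an assumed seven-qubit code per level; every number here is an illustrative assumption, not a measurement:

```python
# Textbook threshold-theorem scaling for concatenated codes (idealized):
#   logical error after k levels ~ p_th * (p / p_th) ** (2 ** k)
#   physical qubits per logical qubit ~ n ** k   (n = code block size per level)
# All numbers are illustrative assumptions.
p_th = 1e-2   # assumed threshold (~1%)
p = 1e-3      # assumed physical error rate, safely below threshold
n = 7         # e.g. a 7-qubit code per level, ignoring ancilla overhead

for k in range(1, 6):
    logical_error = p_th * (p / p_th) ** (2 ** k)
    physical_cost = n ** k
    print(f"k={k}: ~{physical_cost:>6} physical qubits, logical error ~ {logical_error:.1e}")
```

In this idealized picture, the error shrinks doubly exponentially while the qubit count grows only singly exponentially, which is the entire case for QEC. The rest of this section questions the assumptions behind it.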

However, the multilayered concatenated quantum error correction scheme described above rests on a huge assumption that may well be false: that qubit failures are merely random, independent, and well-behaved events.

The reason is physical: the failure of a qubit is the result of losing quantum coherence, and this process is definitely not linear in time. When the time span doubles, the chance that a qubit decoheres does not merely double; it can be orders of magnitude higher, and the speed of that decay is set by the condition of the overall system. As a result, the assumption that errors in two logical qubits occur completely independently of each other may not hold. They may be correlated, whether through the shared environment and system conditions or through physics not yet understood.

Thus, the exponential increase in reliability promised by concatenated quantum error correction may not pan out in reality. We could very well end up in a situation where, no matter how many layers of error correction the system has, the gain in reliability never catches up with the resources consumed to provide it.
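One way to see the worry is a toy model: if some fraction of failures are correlated, through a shared environment, control crosstalk, or unknown physics, the logical error picks up a floor term that concatenation cannot square away. The parameters below are made up for illustration:

```python
# Toy model: a level-(k+1) block fails either because two constituent blocks
# fail independently (~p**2) or because one correlated event takes out both
# at once (~c). Both parameters are illustrative assumptions.
def next_level_error(p: float, correlated_floor: float) -> float:
    return p ** 2 + correlated_floor

p_independent = 1e-2
p_correlated = 1e-2
c = 1e-6          # assumed rate of correlated, code-breaking events

for level in range(2, 7):
    p_independent = next_level_error(p_independent, 0.0)
    p_correlated = next_level_error(p_correlated, c)
    print(f"Level {level}: independent ~ {p_independent:.1e}, with correlations ~ {p_correlated:.1e}")
```

With purely independent errors the rate keeps squaring toward zero; with even a tiny correlated component it stalls at the floor, no matter how many layers are added.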

For example, if doubling the number of effective logical qubits requires more than twice the energy and materials, the whole system is not scalable. QEC builds each layer out of the redundancy of the previous layer’s logical qubits, so the number of physical qubits required also grows exponentially, potentially canceling out any advantage gained from concatenation. In that case, the impressive error-rate numbers are merely mathematical games and do not create any physical advantage.

Even in a much more favorable scenario, if each layer of error correction squares the reliability but also squares the resources required, the exponential gain in reliability is offset by an equally exponential increase in cost, leaving no net physical advantage.
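A quick arithmetic sketch of how the two cost accountings diverge when chasing a practical target error rate; the starting error, target, and per-level overhead are all illustrative:

```python
# Physical-qubit cost to reach a target logical error under two cost models:
#   standard:    each concatenation level multiplies the qubit count by n
#   pessimistic: each level squares the qubit count, i.e. resources grow as
#                fast as the reliability (the scenario described above)
# All numbers are illustrative.
p0, target = 1e-2, 1e-12   # level-1 logical error and desired logical error
n = 100                    # assumed physical qubits per level-1 logical qubit

levels, error = 1, p0
qubits_standard = qubits_pessimistic = n
while error > target:
    error = error ** 2
    qubits_standard *= n
    qubits_pessimistic = qubits_pessimistic ** 2
    levels += 1

print(f"{levels} levels to reach {target:.0e}")
print(f"standard cost model:    ~{qubits_standard:.1e} physical qubits")
print(f"pessimistic cost model: ~{qubits_pessimistic:.1e} physical qubits")
```

Under the standard accounting the bill is large but conceivable (~10^8 physical qubits in this toy example); under the pessimistic accounting it explodes to ~10^16, which is what “no net physical advantage” means in practice.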

In that case, the multilayered quantum error correction is merely a mathematical game and does not represent a real physical advantage. A useful quantum computer would be impossible.

Can the Current Quantum Computing Systems Scale to Millions of Qubits?

Even if QEC itself holds true, an independent practical question remains: can current quantum computing systems scale to millions of qubits?

In quantum computing, everything is built upon the physical qubit. Even if the QEC theorem holds, it requires very high-quality physical qubits, which in turn demand extreme conditions: pristine fabrication, extreme isolation, ultra-precise control, and aggressive error mitigation, all of that for a single physical gate. Good luck scaling that to millions.

There are clearly too many dreamers in the quantum computing field.

A favorite counterargument by the believers of quantum computing is to use the history of semiconductor development as an analogy. After all, when the first semiconductor integrated circuit was made in 1958, no one could have imagined the system would be scaled billions of times in less than 70 years.

But the analogy to the semiconductor transistor is invalid. Semiconductor manufacturing was cheap and scalable from the get-go: it was inherently cheap in physics, even if initially not in engineering. The quantum computing field is entirely different. There simply isn’t a unit or element (a physical gate) that can be made cheaply, because it requires conditions such as extremely low temperatures and superconducting circuitry. Unless an entirely new phenomenon is discovered, I do not believe quantum computing is scalable as it stands now. It is non-scalable not only in terms of engineering, but also in terms of physics, which is far more fundamental.

At a more fundamental level, this distinction is crucial. The physics of a silicon transistor is fundamentally amenable to mass production. The physics of a superconducting qubit, the platform most commonly associated with quantum computing, is the exact opposite: inherently hostile to scale and economy, demanding near-absolute-zero temperatures and exquisite isolation. If superconducting qubits were the only path forward, the endeavor would be non-scalable in physics, not just in engineering.

The Hope

There is still hope, though. The quantum computing field is not a single, monolithic effort. It is an intense, high-stakes race between competing physical platforms, each one representing a different bet on which underlying physics will ultimately prove scalable. The leading contenders include:

  • Trapped Ions: Use individual charged atoms held by electromagnetic fields. They have very high fidelity and operate at more manageable temperatures, but scaling the complex laser control systems is the core physics problem.
  • Photonics: Use particles of light on silicon chips. This platform is inherently scalable and room-temperature, leveraging existing fabrication infrastructure. However, making photons interact reliably to perform two-qubit gates is the fundamental physics challenge.
  • Neutral Atoms: Use uncharged atoms held in vast arrays by lasers. The scalability in number is breathtaking, but achieving high-fidelity control remains a significant challenge.

The “dream” of the quantum industry is that one of these competing physical platforms will prove to have more favorable scaling laws. Based on the physics demonstrated so far, no clear scalable path exists; it is a physics problem. But some of the world’s top physicists are motivated to explore these different, competing, and uncertain paths forward, so there may still be hope.

Why Are Quantum Computing Investments Overhyped?

At the same time, this “hope” is hyped to many times its real size, drawing many billions of dollars of speculative investment into the field. Perhaps that is not entirely a bad thing: even if the money is wasted, it is still far from the worst way to waste money today.

The quantum market is projected to reach $65 billion by 2030 (McKinsey, 2023), with startups raising $2.2 billion in 2023 (Bloomberg, 2024).

Yet, hype outpaces reality:

  • Physical vs. Logical Qubits: Companies boast processors with 1,000+ physical qubits, but logical qubits (dozens experimentally) are the true metric for utility.
  • Misleading Milestones: Achievements like entangling 24 logical qubits are benchmarks, not commercial tools, yet inflate valuations.
  • Exponential Costs: QEC’s qubit overhead—1,000–20,000 physical qubits per logical qubit—demands billions in infrastructure, far beyond current ~1,000-qubit systems.

Noisy Intermediate-Scale Quantum (NISQ) devices promise near-term applications, but a 2024 National Academies report doubts their advantage over classical supercomputers. Investors risk a bubble, driven by overblown “quantum advantage” claims and timelines that portray fault tolerance as almost imminent.

Are There Realistic Near-Term Applications?

NISQ devices target hybrid algorithms (e.g., the Variational Quantum Eigensolver) for optimization or small-scale quantum simulation. These lack proven superiority over classical methods and are limited by noise. Long-term applications such as drug discovery and materials design require fault-tolerant systems, making near-term returns speculative. The field’s value lies in foundational research, not imminent products. In other words, the only paying customers in quantum computing are organizations exploring and researching quantum computing itself, and the quantum hype sustains those budgets. In this sense the field is self-sustaining, but not in any real economic sense.
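To make “hybrid algorithm” concrete, here is a deliberately tiny, classically simulated sketch of a VQE-style loop (one qubit, a toy Hamiltonian of my own choosing, a single-parameter ansatz). It only illustrates the structure, a classical optimizer wrapped around a parameterized “quantum” state, and makes no claim of quantum advantage:

```python
import numpy as np

# Toy VQE-style loop, simulated classically for a single qubit.
# Hamiltonian H = Z + 0.5*X (illustrative); ansatz |psi(theta)> = Ry(theta)|0>.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def energy(theta: float) -> float:
    # "Quantum" step: prepare the ansatz state and evaluate <psi|H|psi> exactly.
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(state @ H @ state)

# "Classical" step: a crude optimizer sweeping the single parameter.
thetas = np.linspace(0.0, 2.0 * np.pi, 721)
best_theta = min(thetas, key=energy)
exact_ground = np.linalg.eigvalsh(H)[0]
print(f"VQE-style estimate: {energy(best_theta):.4f}, exact ground energy: {exact_ground:.4f}")
```

A real NISQ run replaces the exact expectation value with noisy, sampled measurements on hardware, which is exactly where unproven claims of superiority over classical methods come under strain.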

How Realistic Is Fault-Tolerant Quantum Computing?

The Fault-Tolerance Threshold Theorem underpins the quantum optimism. Concatenated QEC’s exponential error suppression (from 10^-2 to 10^-16 over four levels of concatenation) offers a path to 10^-12 error rates, but at a cost of millions of physical qubits. Progress has been made: physical qubit counts rose from about 50 in 2019 to 1,000+ in 2025, and QEC scaling experiments are being performed, albeit under idealized conditions. Google and IBM target on the order of 100 logical qubits by 2030, enabling small-scale tasks, but full fault tolerance exists only in the most enthusiastic projections.

The physics of scalability remains uncertain. Superconducting qubits face fundamental limits to scalability and practical use, and the alternative platforms (trapped ions, photonics, neutral atoms) are unproven. A breakthrough in both physics and engineering, akin to the transistor, is needed, but nothing remotely comparable is in sight.

Fault-tolerance remains a high-stakes gamble.

To Quantum Investors:

Quantum investors should note:

  • Timelines: Fault-tolerance is at least decades away, even if it is scientifically possible, with no clear profitability for startups.
  • Physics Uncertainty: Scalability hinges on unproven platforms, risking dead ends.
  • Hype Bubble: Overblown claims inflate valuations, threatening losses.

If investors continue to believe in it, high short-term returns are quite possible. But such short-term speculative returns will have no bearing on the long-term success of quantum computing as a real technology that solves real problems for humanity.

On the other hand, I admire the bravery of quantum computing investors, because, despite the risk, it is something quite noble to pursue. If the money is wasted, one can take comfort that it is far from being the worst way to waste one’s money.

Conclusion:

Quantum computing is one of humanity’s most audacious scientific quests, but the dreamers in the field face a brutal reality. Concatenated QEC’s exponential costs push fault tolerance indefinitely into the future. Current milestones (48 claimed logical qubits, 99.9% gate fidelities) seem to validate the Fault-Tolerance Threshold Theorem on paper, but scalability remains a physics and engineering problem, with no platform offering a clear path. Investors chasing quantum’s allure risk losses in a hype-driven market, where NISQ’s near-term promise is unproven and fault tolerance awaits a transformative physical discovery.

Quantum computing may be a noble quest, but at present, in the hands of financial elites and scammers, it has been geared up as just another get-rich-quick scheme, like cryptocurrencies. At least Satoshi’s Bitcoin blockchain has been proven to be what Satoshi himself believed it to be, even though the real technology behind Satoshi’s vision was hijacked and perverted into a crypto cancer. Quantum computing, in contrast, still lacks a proven foundation. Stripped of the speculators’ hype, there is highly respectable research and science underneath, but I suspect that Satoshi’s Bitcoin blockchain may succeed in overcoming the crypto cancer before quantum computing finds its real footing.
