Integrating Quantum & Classical HPC: Pathways to Scalable Innovation
At Quantum World Congress 2025, a panel moderated by Bob Sutor tackled a practical question with outsized impact: how to integrate quantum processors with high-performance computing (HPC) to unlock scalable innovation. Panelists Steve Brierley (Riverlane), Jan Goetz (IQM), Travis Humble (Oak Ridge National Laboratory), and Masoud Mohseni examined the hardware, software, and operations hurdles that stand between today’s pilots and tomorrow’s hybrid systems.
The through-line: quantum won’t replace HPC—it will augment it as a domain-specific accelerator (a QPU, by analogy to GPUs). That framing shifts the task from hype to engineering: co-design the data paths, schedule heterogeneous resources efficiently, and stabilize latency and jitter so error correction and feed-forward logic can run in real time.
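To make the latency point concrete, here is a minimal back-of-envelope sketch of a measure-decode-feed-forward loop checked against a real-time deadline. All timing numbers are illustrative assumptions for the sake of the sketch, not figures from the panel.

```python
# Illustrative feed-forward latency budget for a hybrid QPU + classical control loop.
# Every timing number below is an assumption for illustration, not a panel figure.

budget_us = {
    "syndrome_measurement": 1.0,   # read out stabilizers on the QPU
    "readout_transfer": 0.3,       # move raw detector data to the classical decoder
    "decoding": 1.0,               # run the error-correction decoder
    "feed_forward": 0.2,           # send the corrective/branching instruction back
}

logical_cycle_us = 3.0             # assumed real-time deadline per logical operation

total_us = sum(budget_us.values())
slack_us = logical_cycle_us - total_us

print(f"loop latency: {total_us:.1f} us, deadline: {logical_cycle_us:.1f} us, slack: {slack_us:.1f} us")
for stage, t in budget_us.items():
    print(f"  {stage:<22} {t:4.1f} us ({100 * t / total_us:.0f}% of loop)")

# Jitter matters as much as the mean: if any stage occasionally overruns the slack,
# the quantum program stalls and idling qubits decohere.
```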
On why HPC (not “just servers”): panelists argued that HPC is where new accelerators land first, then trickle down to the broader compute stack. As quantum systems scale, HPC becomes the measurement yardstick—and the control plane—because many quantum workflows quickly exceed conventional capacity. Hybridization is already emerging in chemistry and materials: use HPC for large-scale simulation and offload the quantum-hard core to a QPU. Examples included embedding methods (treating the broader environment classically while applying high-fidelity quantum models to the active region) and quantum Monte Carlo variants where the bulk workload stays on HPC and target subroutines run on quantum hardware.
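As a concrete (and heavily simplified) illustration of the embedding pattern, the toy sketch below treats most of a model Hamiltonian classically and hands only a small "active region" to a placeholder routine standing in for a QPU call. The partitioning and the `solve_active_region` stand-in are assumptions for illustration, not any panelist's workflow.

```python
import numpy as np

def classical_bulk_energy(h_diag: np.ndarray) -> float:
    """Cheap classical treatment of the large 'environment' part (HPC side)."""
    # Stand-in for a large-scale mean-field / DFT-style calculation.
    return float(np.sum(h_diag))

def solve_active_region(h_active: np.ndarray) -> float:
    """Placeholder for the quantum-hard core that would be offloaded to a QPU.

    Here we simply diagonalize exactly, which only works because the active
    region is tiny; on real hardware this is where a quantum algorithm would run.
    """
    eigenvalues = np.linalg.eigvalsh(h_active)
    return float(eigenvalues[0])  # ground-state energy of the active region

rng = np.random.default_rng(0)

# Toy model: a large diagonal "environment" plus a small, strongly coupled active block.
env_diag = rng.uniform(-1.0, 0.0, size=10_000)          # handled on HPC
active_block = rng.normal(size=(4, 4))
active_block = 0.5 * (active_block + active_block.T)    # make it Hermitian

total_energy = classical_bulk_energy(env_diag) + solve_active_region(active_block)
print(f"hybrid estimate of total energy: {total_energy:.3f}")
```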
On error correction & data movement: reaching utility scale will demand extreme classical throughput—on the order of 10–100 TB/s of detector/control data—processed between logical operations to avoid stalls and decoherence. That implies specialized classical silicon (e.g., ASICs clustered near the cryostat/control racks), tight timing guarantees, and low-jitter interconnects to keep the quantum pipeline fed. The mantra: no ecosystem without a system—and no system without a classical backbone.
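A rough throughput estimate shows where numbers of that magnitude come from; the qubit count, syndrome rate, and bits-per-measurement below are assumptions chosen for illustration, not figures quoted by the panel.

```python
# Back-of-envelope decoder-input bandwidth for a large error-corrected machine.
# All parameters are illustrative assumptions, not panel figures.

physical_qubits = 10_000_000      # assumed machine size at utility scale
syndrome_rate_hz = 1_000_000      # assumed syndrome-extraction rounds per second
bits_per_measurement = 8          # assumed soft readout information per qubit

bits_per_second = physical_qubits * syndrome_rate_hz * bits_per_measurement
terabytes_per_second = bits_per_second / 8 / 1e12

print(f"detector data stream: {terabytes_per_second:.0f} TB/s")
# -> 10 TB/s under these assumptions; larger machines or richer readout data
#    push toward the upper end of the 10-100 TB/s range cited in the discussion.
```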
On operations & scheduling: cloud-only access creates a “double-queue” penalty (one queue for HPC, another for cloud quantum), wasting cycles and frustrating users. On-prem or tightly peered deployments let centers experiment with co-scheduling, resource allocation, and network placement to meet real-time constraints. Practicalities matter: power, cooling, EMI/vibration, safety, and even chemical handling in data halls influence what “integration” really means.
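The double-queue penalty is easy to quantify with a toy model. The exponential wait-time distributions and their means below are assumptions for illustration only, not measured queue data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_jobs = 100_000

# Assumed mean queue waits (minutes); purely illustrative.
hpc_wait = rng.exponential(scale=30.0, size=n_jobs)      # wait for HPC allocation
qpu_wait = rng.exponential(scale=20.0, size=n_jobs)      # wait for cloud QPU access

# Cloud-only access: win the HPC queue first, then queue again for the remote QPU
# while the HPC allocation idles (or is given up and re-requested).
double_queue = hpc_wait + qpu_wait

# Co-scheduled on-prem/peered deployment: both resources granted together,
# so the effective wait is bounded by the slower of the two requests.
co_scheduled = np.maximum(hpc_wait, qpu_wait)

print(f"mean wait, double queue : {double_queue.mean():.1f} min")
print(f"mean wait, co-scheduled : {co_scheduled.mean():.1f} min")
print(f"avoidable overhead      : {double_queue.mean() - co_scheduled.mean():.1f} min per hybrid job")
```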
On software & toolchains: today’s quantum SDKs (built for standalone devices) don’t yet mesh with HPC job schedulers, workflow managers, and security models. The panel called for end-to-end orchestration that looks like HPC—not a bolt-on. Expect rapid co-evolution as error-corrected stacks arrive and hybrid workflows mature.
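What "orchestration that looks like HPC" might mean in practice is easiest to show as a sketch. The `HybridJob` spec, the `submit` function, and the `qrun` command below are hypothetical, intended only to show a QPU step being declared and scheduled alongside classical steps rather than bolted on through a separate cloud client.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    command: str
    resources: dict          # e.g. {"nodes": 64} or {"qpu": "logical-20q", "shots": 10_000}

@dataclass
class HybridJob:
    name: str
    steps: list = field(default_factory=list)

def submit(job: HybridJob) -> None:
    """Hypothetical submission path: one job description, one scheduler, one queue.

    A real implementation would translate this into the site's batch system
    (heterogeneous job steps, co-reserved QPU time, shared security context)
    instead of calling a separate cloud API from inside the classical job.
    """
    for step in job.steps:
        print(f"[{job.name}] scheduling step '{step.name}' with {step.resources}")

job = HybridJob(
    name="embedding-chemistry",
    steps=[
        Step("mean_field", "srun python mean_field.py", {"nodes": 64}),
        # 'qrun' is a hypothetical QPU-side analogue of srun, used here for illustration.
        Step("active_space_qpu", "qrun solve_active_space.qasm", {"qpu": "logical-20q", "shots": 10_000}),
        Step("post_process", "srun python assemble_results.py", {"nodes": 4}),
    ],
)

submit(job)
```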
Finally, the panel looked ahead to cost, form factor, and deployment models. Expect both cloud and on-prem quantum to coexist; the balance will track TCO, power budgets, and workload criticality. The destination is clear: hybrid architectures where HPC + AI + QPU operate as one fabric, enabling problems we don’t attempt today—from beyond-Born–Oppenheimer chemistry to physical-AI simulations that need quantum-level fidelity.
Session Photos