Unlocking Complex Problems: Jan Goetz on Quantum–HPC Integration and What Comes Next

In a focused keynote, Jan Goetz, CEO & Co-Founder of IQM Quantum Computers, explained how IQM is integrating quantum systems directly into high-performance computing (HPC) environments to tackle complex workloads—prioritizing products that ship to centers now, joint software integration, and a feedback loop that educates quantum teams about HPC and HPC teams about quantum.

Product-first integration

IQM deploys its latest-generation Radiance systems to HPC sites and aligns upgrade roadmaps so centers can keep pace with performance gains. A smaller Spark system supports universities and workforce development with a modular, hands-on platform. IQM also offers cloud access for use-case development, but on-prem systems are the workhorse for deep HPC integration.

Designed to live in the datacenter

Under the hood, today’s superconducting machines combine cryogenics (“golden chandeliers”), room-temperature racks, and gas handling—engineered to fit standard 19-inch footprints and facility services (electrical and cooling) common to supercomputing floors. Early deployments showed no degradation from vibration or EMI when placed beside classical gear.

Performance that scales

Goetz highlighted chips where fidelities exceed 99.9% across basic operations and—critically—do not degrade as qubit counts rise, a prerequisite for layering quantum error correction. IQM runs its own in-house fabrication and custom design software, enabling rapid tape-out cycles (every few weeks). The team has also advanced high-connectivity layouts—for example, a qubit coupled to 24 neighbors—which can reduce the physical-to-logical qubit overhead by 10–20× (simulation-dependent) versus standard surface-code assumptions.

Two ways to co-process with HPC

  • Iterative (ping-pong): e.g., variational quantum eigensolver (VQE)-style loops, where a classical optimizer drives repeated quantum evaluations.

  • Sequential workflows: HPC prepares data; the quantum processor performs a targeted subroutine; results return to HPC for final computation.
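The iterative pattern can be sketched in a few lines. This is a toy, not IQM's software: the "quantum" call is simulated classically with shot noise standing in for a QPU submission, and the classical side runs gradient descent via the parameter-shift rule.

```python
import math
import random

def quantum_expectation(theta):
    """Stand-in for a QPU call: estimate <Z> after Ry(theta) on |0>,
    with shot noise. On real hardware this would submit a circuit."""
    shots = 2000
    p0 = math.cos(theta / 2) ** 2  # probability of measuring |0>
    counts0 = sum(1 for _ in range(shots) if random.random() < p0)
    return (2 * counts0 - shots) / shots  # (counts0 - counts1) / shots

def vqe_loop(steps=60, lr=0.4):
    """Ping-pong loop: the classical optimizer repeatedly queries the
    quantum evaluator, here using the parameter-shift rule for gradients."""
    theta = 0.3
    for _ in range(steps):
        shift = math.pi / 2
        grad = 0.5 * (quantum_expectation(theta + shift)
                      - quantum_expectation(theta - shift))
        theta -= lr * grad  # minimize <Z>; the minimum sits at theta = pi
    return theta, quantum_expectation(theta)
```

Each iteration is one round trip between the classical and quantum sides, which is why tight HPC integration (low-latency scheduling between CPU and QPU) matters for this workload class.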

Real integrations, real learnings

“We bring systems on-prem and integrate them with HPC teams—because both sides have to learn each other’s world.”
— Jan Goetz, IQM

IQM first linked a system remotely to Finland’s LUMI supercomputer to establish software pathways; it then installed on-prem systems at sites like LRZ in Munich and Oak Ridge National Laboratory (ORNL)—validating floor readiness and kicking off site-specific software work. IQM remains framework-agnostic, building adapters to popular stacks (e.g., CUDA-Q, Cirq, Qiskit) and co-developing streamlined pathways with each center.
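A framework-agnostic adapter layer might look like the following sketch. Everything here is hypothetical (the class and function names are illustrative, not IQM's actual API): the HPC side codes against one narrow interface, and thin wrappers translate to whichever SDK a given center has standardized on.

```python
from typing import Callable

class QPUAdapter:
    """Narrow interface the HPC workflow targets, regardless of SDK.

    A real deployment would wrap Qiskit, Cirq, or CUDA-Q submission
    functions; here the backend is injected as a plain callable."""

    def __init__(self, submit: Callable[[str, int], dict]):
        self._submit = submit  # SDK-specific submission function

    def run(self, circuit_qasm: str, shots: int = 1024) -> dict:
        return self._submit(circuit_qasm, shots)

def fake_sdk_submit(circuit_qasm: str, shots: int) -> dict:
    """Toy stand-in for a framework call: returns fabricated counts."""
    return {"0": shots // 2, "1": shots - shots // 2}

backend = QPUAdapter(fake_sdk_submit)
counts = backend.run("OPENQASM 3; qubit q; h q;", shots=1000)
```

The design choice is the usual one for multi-vendor integration: keep the site-facing surface small so that swapping SDKs touches only the wrapper, not the HPC workflow code.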

From chemistry to chip design

Beyond classical-to-quantum co-processing for molecular ground-state problems (achieving chemical accuracy on small test cases), IQM also uses HPC to simulate and co-design its own chips, in partnership with centers like CSC Finland (ELMA project)—a virtuous loop where HPC accelerates quantum R&D, and quantum modules accelerate select HPC workloads.

The takeaway

If you want quantum to matter inside scientific computing, take the machine to the supercomputer, align roadmaps, keep the software open, and engineer fidelity that holds as systems grow.

