Fernández Pena, Anselmo Tomás
Cardama Santiago, Francisco Javier
Dates: 2026-04-17; 2026-04-17; 2026
URI: https://hdl.handle.net/10347/46765

Abstract: High-Performance Computing (HPC) has historically served as the primary engine of scientific advancement, enabling complex simulations ranging from meteorological prediction to protein folding and drug discovery. For decades, the exponential growth in computational power was sustained by the empirical trends of Moore's Law and Dennard scaling. In the last decade, however, these principles have shown unambiguous signs of saturation. The fundamental physical, thermodynamic, and quantum limitations of silicon transistors have compelled the industry to abandon frequency scaling in favor of massive parallelism and architectural heterogeneity. Consequently, contemporary top-tier supercomputers are no longer homogeneous machines but complex clusters that integrate multicore Central Processing Units (CPUs) with specialized accelerators such as Graphics Processing Units (GPUs) or Field-Programmable Gate Arrays (FPGAs).

Language: eng
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Keywords: quantum computing; high-performance computing; distributed quantum computing; quantum network; quantum software
Subjects: 330406 Computer architecture (Arquitectura de ordenadores); 120311 Computer software (Logicales de ordenadores)
Title: Distributed quantum computing integrated into high-performance computing environments
Type: doctoral thesis
Access: open access