Foundations & Frontiers

The Fastest Computers in the World

An overview of the supercomputers at the bleeding edge of science.

Anna-Sofia Lesiv

September 12, 2024

The first computers were massive. The ENIAC, the first general-purpose electronic computer, occupied about 1,800 square feet, consumed 150 kilowatts, and processed about 500 floating point operations per second. Stories say that when the ENIAC was switched on at the University of Pennsylvania, the lights in Philadelphia dimmed.

Of course, the phones in our hands today are billions of times more powerful than the ENIAC was. The A16 chip inside the iPhone 15 can process nearly two trillion floating point operations per second. And while facts like this highlight the incredible arc of miniaturization that has dominated computing since the 1960s, the reality is that massive computing projects have never really left us.
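That "billions of times" claim is easy to check with back-of-the-envelope arithmetic. Here is a minimal sketch in Python, using only the approximate figures cited above (roughly 500 FLOPS for the ENIAC and roughly two trillion FLOPS for the A16):

```python
# Back-of-the-envelope comparison of the ENIAC and a modern phone chip,
# using the approximate figures cited in the text.
eniac_flops = 500      # ~500 floating point operations per second
a16_flops = 2e12       # ~2 trillion floating point operations per second

speedup = a16_flops / eniac_flops
print(f"A16 vs. ENIAC: ~{speedup:.1e}x")   # ~4.0e+09, i.e. roughly 4 billion times
```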

For decades, while some researchers have been working on making circuits smaller and more efficient, others have concentrated on building the fastest and biggest computers physically possible. More importantly, these massively powerful computers, or supercomputers, have been critical to national security from the very beginning – ever since the ENIAC was used to run calculations essential to the development of the first hydrogen bomb.

Today, everything from performing nuclear test simulations to brute-force code breaking requires the use of a supercomputer. The bigger the computer, the more complicated the codes it can break and the bigger the problems it can solve.

Over the past few decades, the build-out of national supercomputing capability has taken on arms-race-like dynamics. China’s forays into constructing supercomputing infrastructure have remained shrouded in secrecy. There is much speculation over the details and capabilities of China’s Tianhe-3, which many believe rivals the top US supercomputers and was assembled using Chinese-designed chips after US export controls barred American chip designers from selling their most advanced chips to China.

The most powerful supercomputer that we know of today is Frontier, one of the most recent additions to America’s computing arsenal. It’s a gigantic 7,300-square-foot system consisting of over 9,400 CPUs and nearly 38,000 GPUs arranged across 74 cabinets, each weighing 8,000 pounds. Some 145 kilometers of cable keep all of these nodes connected, while 6,000 gallons of water flow through pumps every minute to cool the system as it grinds away on computations.

The system’s claim to fame is that it is the world’s first exascale computer, meaning it is the first able to process over one quintillion floating point operations per second. Oak Ridge National Laboratory, where Frontier is housed, likes to say that if every person on earth did a single operation, like an addition or a multiplication, every second, it would take them four years to do what Frontier can do in just one second.
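That analogy holds up under simple arithmetic. Below is a minimal sketch, assuming a world population of roughly 8 billion (the population figure is our assumption, not Oak Ridge’s):

```python
# Sanity-check Oak Ridge's analogy: how long would it take everyone on Earth,
# each doing one operation per second, to match one second of Frontier?
frontier_ops_per_second = 1e18        # one exaflop: a quintillion operations per second
world_population = 8e9                # roughly 8 billion people (assumption)

seconds_needed = frontier_ops_per_second / world_population
years_needed = seconds_needed / (365 * 24 * 3600)
print(f"{years_needed:.1f} years")    # ~4.0 years
```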

Frontier is a massive resource, not only for the national security applications mentioned above but for the scientific community more broadly. There are certain types of problems — modeling fluid dynamics or conducting finite element analysis, for example — that have no closed-form solutions. In other words, there is no formula that can be solved outright for how cloud formations will evolve, how air will curl around a new plane design, or how exactly a vehicle’s body will deform when impacted. All of these scenarios need to be simulated numerically, and the higher the simulation’s resolution, the more accurate the result.
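To illustrate why resolution matters, here is a toy sketch of a numerical simulation: one-dimensional heat diffusion solved with an explicit finite-difference scheme. It is an illustrative stand-in for the vastly larger fluid and structural models described above, not code any lab actually runs; refining the grid shrinks the error of the simulated answer but multiplies the amount of work.

```python
import numpy as np

def diffuse(n_points, t_final=0.1, alpha=1.0):
    """Explicit finite-difference solution of u_t = alpha * u_xx on [0, 1],
    with u(0) = u(1) = 0 and a sine-wave initial temperature profile."""
    dx = 1.0 / (n_points - 1)
    dt = 0.25 * dx**2 / alpha                # stable time step for this scheme
    steps = round(t_final / dt)
    x = np.linspace(0.0, 1.0, n_points)
    u = np.sin(np.pi * x)                    # initial condition
    for _ in range(steps):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # The exact solution decays as exp(-pi^2 * alpha * t), so the error is measurable.
    exact = np.exp(-np.pi**2 * alpha * steps * dt) * np.sin(np.pi * x)
    return np.abs(u - exact).max(), steps * n_points

for n in (11, 21, 41, 81):
    error, work = diffuse(n)
    print(f"{n:>3} grid points: max error {error:.2e}, ~{work:,} cell updates")
```

Each doubling of resolution here roughly quarters the error while increasing the work about eightfold, and in three dimensions the cost of extra resolution grows far faster. That scaling is exactly why the largest simulations need machines like Frontier.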

The idea behind building an exascale machine is simply that it will allow scientists to conduct larger, more complex simulations faster, which many hope will accelerate the rate of scientific discovery.

In the past, organizations like the National Oceanic and Atmospheric Administration have benefited from supercomputers by using them to perform weather simulations and achieve more accurate forecasting. Astronomers have long relied on supercomputers to model complex phenomena like galaxy formation, and with time, the aperture on the types of problems increasingly powerful computers can tackle has widened significantly. Coming into scope now is everything from modeling the function of organs from the molecules up, to simulating the evolution and mutation of viruses like SARS-CoV-2, to modeling the operation of the power grid, and more.

Being able to say that we have reached the exascale era of computing is certainly an exciting milestone, and a testament to how much more efficient the underlying hardware has become. When conversations about building exascale systems first started in 2012, many assumed it would simply be infeasible. At the time, energy consumption estimates for systems of that scale approached 3 gigawatts — the output of three nuclear plants. Today, Frontier runs on 21 megawatts.
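Put in terms of operations per watt, the improvement is dramatic. Here is a quick calculation from the figures above, treating exascale as a flat 10^18 operations per second:

```python
# Rough energy-efficiency comparison between the early-2010s projections and Frontier.
exaflop = 1e18                       # operations per second

projected_power_watts = 3e9          # ~3 gigawatts, the early estimate
frontier_power_watts = 21e6          # ~21 megawatts, Frontier's actual draw

projected_flops_per_watt = exaflop / projected_power_watts   # ~0.33 GFLOPS per watt
frontier_flops_per_watt = exaflop / frontier_power_watts     # ~48 GFLOPS per watt

print(f"Projected:   {projected_flops_per_watt / 1e9:.2f} GFLOPS/W")
print(f"Frontier:    {frontier_flops_per_watt / 1e9:.1f} GFLOPS/W")
print(f"Improvement: ~{projected_power_watts / frontier_power_watts:.0f}x less power")
```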

Despite Frontier breaking the exascale barrier only two years ago, Lawrence Livermore National Laboratory is already working on deploying its own supercomputer, El Capitan, expected to process two exaflops, sometime in 2024. Meanwhile, the Department of Energy is already gearing up to build yet another exascale system to succeed Frontier, called Discovery, which it anticipates will be three to five times more performant. Discovery will be housed at Oak Ridge National Lab, alongside Frontier, and is expected to be delivered by 2028.

And while it’s a net positive that the United States is investing in infrastructure that will help research move the needle on some of the most exciting outstanding questions in science, companies like Cerebras are working on solutions that would make traditional supercomputing infrastructure obsolete — for certain applications. Rather than wiring up tens of thousands of GPUs together through elaborate cabling, Cerebras etches this type of interconnection right into the silicon. It’s known for producing wafer-scale chips, the size of dinner plates, with the idea that concentrating computational capacity on the chip rather than across thousands of square feet of physical space will lead to far more efficient computation. That hunch is being proven right.

In May 2024, Cerebras demonstrated that its second-generation chip, the Wafer Scale Engine 2, which has 850,000 cores, could perform molecular dynamics simulations 179 times faster than Frontier by dedicating a processor core to every simulated atom. While this kind of performance currently applies only to molecular systems small enough to map one atom per core, it’s indicative of how powerful a miniaturized approach to supercomputing can be.
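A toy sketch can make the one-core-per-atom mapping concrete. The snippet below is plain Python and our own illustration, not Cerebras’s programming model: each simulated atom in a small spring-connected chain is updated using only its immediate neighbors, the kind of strictly local communication a wafer-scale chip keeps on silicon rather than routing through racks of cables.

```python
import numpy as np

# Toy model of the "one core per simulated atom" idea: a 1D chain of atoms
# joined by springs. Each atom's update needs only its two neighbors'
# positions, a local exchange that wafer-scale hardware keeps on-chip
# instead of sending over inter-node cables.

n_atoms = 16                      # imagine one (virtual) core per atom
k, mass, dt = 1.0, 1.0, 0.01      # spring constant, atom mass, time step

pos = np.arange(n_atoms, dtype=float)   # atoms start 1 unit apart
pos[n_atoms // 2] += 0.1                # nudge one atom to set the chain ringing
vel = np.zeros(n_atoms)

def local_force(i, positions):
    """Force on atom i from its immediate neighbors only (springs of rest length 1)."""
    f = 0.0
    if i > 0:
        f += k * (positions[i - 1] - positions[i] + 1.0)
    if i < n_atoms - 1:
        f += k * (positions[i + 1] - positions[i] - 1.0)
    return f

for step in range(1000):
    # On a wafer-scale chip, every atom's core would run this body in parallel.
    forces = np.array([local_force(i, pos) for i in range(n_atoms)])
    vel += forces / mass * dt
    pos += vel * dt

print("final displacement of the nudged atom:",
      round(pos[n_atoms // 2] - n_atoms // 2, 4))
```

In a real molecular dynamics code the forces and neighbor lists are far more involved, but the communication pattern, each atom exchanging data only with nearby atoms, is the property the wafer-scale approach exploits.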

Ambitions of designing supercomputers are as old as computers themselves. They offer the promise of creating machines that can mirror the world inside of themselves, from the interactions of the tiniest particles to the collisions of massive faraway galaxies. Whether those simulations take place in systems the size of football fields or can be confined to chips we can hold in our hands, the milestones being achieved in supercomputing today are exciting markers on the unceasing journey to see the nature of things at ever-greater resolution.

Disclosure: Nothing presented within this article is intended to constitute legal, business, investment or tax advice, and under no circumstances should any information provided herein be used or considered as an offer to sell or a solicitation of an offer to buy an interest in any investment fund managed by Contrary LLC (“Contrary”) nor does such information constitute an offer to provide investment advisory services. Information provided reflects Contrary’s views as of a time, whereby such views are subject to change at any point and Contrary shall not be obligated to provide notice of any change. Companies mentioned in this article may be a representative sample of portfolio companies in which Contrary has invested in which the author believes such companies fit the objective criteria stated in commentary, which do not reflect all investments made by Contrary. No assumptions should be made that investments listed above were or will be profitable. Due to various risks and uncertainties, actual events, results or the actual experience may differ materially from those reflected or contemplated in these statements. Nothing contained in this article may be relied upon as a guarantee or assurance as to the future success of any particular company. Past performance is not indicative of future results. A list of investments made by Contrary (excluding investments for which the issuer has not provided permission for Contrary to disclose publicly, Fund of Fund investments and investments in which total invested capital is no more than $50,000) is available at www.contrary.com/investments.

Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Contrary. While taken from sources believed to be reliable, Contrary has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Please see www.contrary.com/legal for additional important information.
