How is energy spent in computation?

Nadia Milazzo
April 27, 2023

In the mindset of a computer scientist, every problem requires an algorithm to solve it, and every algorithm requires resources to run. Those resources are the object of any computational analysis, and they usually correspond to time, space and energy. Many debates and discussions around energy consumption in quantum computation animate the community at present — see e.g. the Quantum Energy Initiative launched in 2022 — but here we will take a step back and look at the fundamentals of how energy is spent in both classical and quantum computation, with a more physical mindset. This will be the occasion to recall how beautifully physics, information theory and computation theory connect, revealing fascinating interpretations of fundamental concepts in all those worlds.

Logical reversibility

Elementary operations in classical and quantum computation are often described by logic gates performing Boolean functions on one or more inputs. Let’s take two of the simplest classical ones: the NOT gate and the AND gate. The first takes a single bit as input and outputs its opposite (it performs logical negation), while the second takes two bits as input and outputs a single bit, the result of their logical conjunction. We present the so-called truth tables for these two simple gates in Fig. 1.

Fig. 1 Truth tables for the NOT and the AND gate.
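For readers who prefer code to tables, here is a minimal Python sketch of the same two gates (the function names `NOT` and `AND` are just illustrative):

```python
def NOT(a: int) -> int:
    """Logical negation of a single bit."""
    return 1 - a

def AND(a: int, b: int) -> int:
    """Logical conjunction of two bits."""
    return a & b

# Reproduce the truth tables of Fig. 1
for a in (0, 1):
    print(f"NOT({a}) = {NOT(a)}")
for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a}, {b}) = {AND(a, b)}")
```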

What if we want to retrieve the input from those outputs? It clearly appears that we can always do so unambiguously for the NOT gate, but this is no longer true if we get a 0 from the AND gate, since it corresponds to three different inputs — 00, 01 and 10. We call the NOT gate a reversible gate, while the AND gate is an irreversible one. The ability to retrieve the output from the input and vice versa is only one of the possible descriptions of reversibility; we can simply say that a reversible operation can be undone or, in more mathematical terms, that it is bijective, but the point of view that most interests us here is the following: in a reversible operation, no information is lost.
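The bijectivity criterion can be checked mechanically: a gate is logically reversible exactly when its truth table is injective, i.e. no two inputs share an output. A small sketch (with `NOT` and `AND` redefined inline as lambdas):

```python
from itertools import product

def is_reversible(gate, n_inputs: int) -> bool:
    """A gate is logically reversible iff no two inputs share an output."""
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    return len(set(outputs)) == len(outputs)

NOT = lambda a: 1 - a
AND = lambda a, b: a & b

print(is_reversible(NOT, 1))  # True: each output has a unique preimage
print(is_reversible(AND, 2))  # False: 00, 01 and 10 all map to 0
```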

Information is physical — Rolf Landauer

In 1961 Landauer published a work [1] in which he showed how irreversibility can be connected to the energy spent in computation, and in particular how the erasure of information dissipates energy into the environment. Even further, this amount of energy can be lower bounded, leading to what is nowadays known as Landauer’s principle:

Suppose a computer erases a single bit of information. The amount of energy dissipated into the environment is at least k T ln2, where k is a universal constant known as Boltzmann’s constant, and T is the temperature of the environment of the computer. [2]

The statement above can be equivalently expressed in terms of entropy, given its thermodynamic definition: erasing one bit of information increases the entropy of the environment by at least k ln 2. After having been extensively challenged and debated, Landauer’s principle was finally verified experimentally in different classical settings [3–5] and further extended to a quantum one [6]. Moreover, deeper investigations of the connection between logical and thermodynamic reversibility also led to extensions of Landauer’s principle [7]. But what about the achievability of such a bound? Are modern computers anywhere close to it? At room temperature (≃ 25 °C), the energy dissipated would be at least 18 meV, a very small amount (for example, when a single molecule of methane gas is combusted, the energy released — as heat — is 500 times bigger)! Current technology does much worse than that [2, 8], and even though advances have been made in this direction, e.g. with nanomagnet memories [9], some scientists believe that the semiconductor miniaturization used in modern computers will soon stop providing greater energy efficiency [8]. What is needed instead is a revolution in the fundamentals of computation: what could that look like?
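The 18 meV figure follows directly from the bound; here is a quick back-of-the-envelope check in Python, using Boltzmann's constant expressed in eV/K:

```python
import math

k_B = 8.617333262e-5  # Boltzmann's constant in eV/K

def landauer_bound_eV(T: float) -> float:
    """Minimum energy dissipated when erasing one bit: k T ln 2."""
    return k_B * T * math.log(2)

E = landauer_bound_eV(298.15)  # room temperature, about 25 °C
print(f"{E * 1e3:.1f} meV")    # about 18 meV, as quoted above
```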

Reversible computing

We have seen that the energy dissipated in elementary computation is related to the irreversibility of operations, or in other terms, to the loss of information. Can we then improve energy efficiency using a reversible model of computation? From a physical point of view, the answer is simply yes! There is no fundamental reason why we cannot go below the bound set by Landauer’s principle. While this is great news, the actual design and manufacturing of practical reversible computing hardware is not simple at all. Research is being conducted on the topic (see e.g. [8, 10] and references therein), but here we will only mention a very simple model to get an idea of what reversible computing might look like and what challenges it presents. The model of a “billiard-ball” computer dates back to the beginning of the ’80s, when Fredkin and Toffoli (yes, as in the well-known gates! ^^) published their work on conservative logic [11]: a model of computation which mirrors physics much more closely than other models do.

Fig. 3 a) The balls can occupy different sites in the grid. b) They can collide in an elastic way. c) We can deflect the trajectories of the balls.

As represented in Fig. 3, each billiard ball may or may not occupy a given site; to make the correspondence with a standard model of computation, we can think of the presence or absence of a ball as the encoding of a signal, representing a logical 1 or 0. Moreover, the balls move in a friction-free environment; the collisions that happen when they cross paths, and the corresponding outputs, simulate the action of gates in a circuit model. It can be shown that this extremely simple model is universal, meaning that it can simulate any computation possible in the standard circuit model (more precisely, the billiard-ball computer can implement a Fredkin gate — also known as the controlled-SWAP gate — which is universal). Other models of reversible computation can instead be based on the Toffoli gate, better known in quantum computation for being universal when combined with the single-qubit Hadamard gate. Without going into too much detail, the overhead of those reversible computations with respect to irreversible ones is the number of ancilla bits required to allow reversibility; this number scales at most linearly with the number of gates in the irreversible circuit.
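To make the universality claim concrete, here is a minimal Python sketch of the Fredkin (controlled-SWAP) gate: it is its own inverse, and with a single ancilla bit fixed to 0 it reproduces the irreversible AND gate on one of its output lines (function names are illustrative):

```python
def fredkin(c: int, a: int, b: int) -> tuple:
    """Controlled SWAP: swap a and b iff the control bit c is 1."""
    return (c, b, a) if c == 1 else (c, a, b)

# Reversible: applying the gate twice recovers the original inputs
for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    assert fredkin(*fredkin(*bits)) == bits

def AND_via_fredkin(a: int, b: int) -> int:
    """AND from a Fredkin gate with one ancilla bit set to 0.

    The third output line carries a AND b; the other two lines are
    the "garbage" bits that keep the computation reversible.
    """
    return fredkin(a, b, 0)[2]

for a in (0, 1):
    for b in (0, 1):
        assert AND_via_fredkin(a, b) == (a & b)
```

Note how the ancilla and garbage bits are exactly the overhead mentioned above: reversibility is bought by keeping information around instead of discarding it.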

However, more difficult challenges stand in the way of implementing reversible computers. If we look closer at the billiard-ball model, we soon realise that it is highly unstable, since the paths followed by a ball moving without friction will be “disturbed” by any small perturbation, destroying the assumption of perfectly implemented operations. We could correct those errors, but doing so would require erasing the information associated with such protocols: so here we are again, spending energy to reduce any external noise source! All of the discussion above must sound very familiar to any reader who has, at least once, come across an article on a much newer, more complicated and fascinating technology… Yes, quantum computing suffers from exactly the same problems as the simplest model of reversible computation we could think of! It is indeed reversible, since it uses only unitary operations which act on qubits and give rise to the so-called quantum circuit model of computation. For example, the NOT gate discussed at the beginning of this article finds its quantum counterpart in the Pauli-X gate (with respect to the computational basis {|0⟩, |1⟩}). Quantum computers are very susceptible to external perturbations (they must be cooled down to fight e.g. thermal noise) and they also rely on perfect gate implementation to achieve reliable computations. Once again, the energy consumed in quantum computation is due to the need to reduce all possible sources of noise.
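The reversibility of the Pauli-X gate can be seen in a few lines of plain Python, treating states as amplitude vectors in the computational basis:

```python
# Pauli-X as a 2x2 matrix acting on amplitude vectors
# (|0> = [1, 0] and |1> = [0, 1] in the computational basis)
X = [[0, 1],
     [1, 0]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-component vector."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

ket0, ket1 = [1, 0], [0, 1]

print(apply(X, ket0))            # [0, 1]: X|0> = |1>, the quantum NOT
print(apply(X, apply(X, ket0)))  # [1, 0]: X is its own inverse, hence reversible
```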

This quite long (but hopefully pleasant ^^) detour on reversible computing leads us to the last part of this article, where we will again have the opportunity to explore, from a physical point of view, what a purely computational problem can look like, and go all the way down to the fundamental laws of thermodynamics.

Quantum error correction and Maxwell’s demon

Errors induced by any form of noise must at some point be corrected to do quantum information processing reliably. The field of quantum error correction (QEC) is wide and rich, and researchers are working tirelessly to bring it ever closer to reality and provide fault-tolerant quantum computing. We will not discuss any particular QEC scheme here, focusing instead on their generic energy requirements.

Quantum error-correction may be thought of as a type of refrigeration process, capable of keeping a quantum system at a constant entropy, despite the influence of noise processes which tend to change the entropy of the system. [2]

This unusual interpretation of QEC as allowing a reduction of entropy may at first sound a bit worrying to physicists passionate about thermodynamics; indeed, according to its second law, we know that the entropy of a closed system can never decrease! Where is the catch? In 1871 Maxwell conjectured for the first time a little “demon” who could violate this law in a very simple scenario, represented in Fig. 4.

Fig. 4 Maxwell’s thought experiment of a little demon able to violate the second law of thermodynamics by reducing the entropy of a system.

We imagine a gas at equilibrium inside a container with two chambers separated by a wall; the little demon has access to a small door in the wall that he can open at will to let gas particles pass from one side to the other. He observes the particles moving, and when he sees a fast particle approaching the door, he opens it to let it through. By repeating this several times, he could separate the fast particles from the slow ones, thus decreasing the total entropy of the system. This paradox kept scientists busy for many years [12], until they realised how to defeat the tricky demon: to know the velocities of the gas particles, he had to measure them! The act of measurement requires the acquisition of information, which has to be stored somewhere, i.e. in the demon’s memory. Unfortunately, all memories are finite, even a demon’s, so at some point he will have to start erasing the collected information to make space for new measurements. We now know very well that, by Landauer’s principle, this erasure increases the entropy of the total system composed of the demon, the gas and their environment, thus compensating for the demon’s efforts to decrease it. The second law of thermodynamics is safe after all!
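A toy bookkeeping exercise makes the resolution quantitative. Assume, as a simplification, that the demon stores one fast/slow bit per sorted particle; erasing those bits dissipates at least kT ln 2 each, raising the environment's entropy by at least as much as the gas's entropy dropped. A sketch under those assumptions:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in J/K

def demon_bookkeeping(n_bits: int, T: float):
    """Entropy the demon removes from the gas (at most) versus the
    entropy his memory erasure adds to the environment (at least)."""
    gas_entropy_drop = n_bits * k_B * math.log(2)   # at most, in J/K
    erasure_heat = n_bits * k_B * T * math.log(2)   # at least, in J
    env_entropy_rise = erasure_heat / T             # in J/K
    return gas_entropy_drop, env_entropy_rise

dS_gas, dS_env = demon_bookkeeping(10**6, T=300.0)
print(dS_env >= dS_gas)  # True: the second law survives
```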

If we now substitute a computer memory for the demon’s, and the so-called syndrome measurements of QEC (which identify the errors that occur) for the velocity measurements, we soon realise that the refrigeration process represented by QEC is simply an analogue of Maxwell’s demon protocol. Indeed, the bits used to store the measurement results needed for error correction must eventually be erased; this can be proven and quantified more precisely using fundamental theorems of information theory, and we refer the curious reader to [2] for the full proof.
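The classical analogue of a syndrome measurement already shows where the erasure cost enters. Below is a sketch of the 3-bit repetition code (the classical cousin of the quantum bit-flip code): two parity checks locate a single flipped bit, and those two syndrome bits, stored for every correction round, are precisely the memory that must eventually be erased at Landauer's cost.

```python
def syndrome(bits):
    """Parity checks of the 3-bit repetition code."""
    b0, b1, b2 = bits
    return (b0 ^ b1, b1 ^ b2)

def correct(bits):
    """Correct a single bit flip; return the codeword and the syndrome bits."""
    s = syndrome(bits)
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return tuple(out), s

# A flip on the middle bit of the codeword 000 is detected and undone;
# the syndrome bits (1, 1) remain in memory until they are erased.
print(correct((0, 1, 0)))  # ((0, 0, 0), (1, 1))
```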

We conclude this article by stressing the take-home message and why it matters: in principle, all computation can be done reversibly at zero energy cost; in practice, we still spend energy trying to make systems stable and to protect them from noise. It is therefore very important to continue studying and searching for new techniques against noise [13], since this will improve the energy efficiency of any computing device.

At ColibrITD

We work to understand which noise mitigation techniques can be useful to achieve reliable computation in the NISQ era, and we search for precise estimations of the energy consumption of current hardware to provide useful benchmarks of quantum technologies.


[1] R. Landauer, “Irreversibility and Heat Generation in the Computing Process.” IBM Journal of Research and Development, vol. 5, no. 3, pp. 183–191 (1961).

[2] Nielsen, M. A., & Chuang, I. “Quantum computation and quantum information” (2002).

[3] Bérut, A., Arakelyan, A., Petrosyan, A. et al. “Experimental verification of Landauer’s principle linking information and thermodynamics.” Nature 483, 187–189 (2012).

[4] Yonggun Jun, Momčilo Gavrilov, and John Bechhoefer, “High-Precision Test of Landauer’s Principle in a Feedback Trap.” Phys. Rev. Lett. 113, 190601 (2014).

[5] Hong, Jeongmin, et al. “Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits.” Science advances 2.3 (2016).

[6] Gaudenzi, R., Burzurí, E., Maegawa, S. et al. “Quantum Landauer erasure with a molecular nanomagnet.” Nature Phys 14, 565–568 (2018).

[7] Takahiro Sagawa, “Thermodynamic and logical reversibilities revisited”, J. Stat. Mech. (2014).

[8] Frank, M.P. “The Future of Computing Depends on Making It Reversible.” IEEE Spectrum (2017).

[9] Brian Lambson, David Carlton, and Jeffrey Bokor, “Exploring the Thermodynamic Limits of Computation in Integrated Systems: Magnetic Memory, Nanomagnetic Logic, and the Landauer Limit.”, Phys. Rev. Lett. 107, 010604 (2011).

[10] Frank, M.P. “Foundations of Generalized Reversible Computing.” In Phillips, I., Rahaman, H. (eds) Reversible Computation. RC 2017. Lecture Notes in Computer Science(), vol 10301. Springer, Cham. (2017).

[11] Fredkin, E., Toffoli, T. “Conservative logic.” Int J Theor Phys 21, 219–253 (1982)

[12] Bennett, Charles H. “Demons, Engines and the Second Law.” Scientific American 257, no. 5 (1987).

[13] Reina, M. “Can we really deal with bit flip?” Medium (2022).
