Sunday, July 16, 2017

Discovering energy

If you get an electricity bill, you're aware that energy is something that can be quantified, consumed, bought and sold.   It's something real, even if you can't touch it or see it.  You probably have a reasonable idea of what energy is, even if it's not a precisely scientific definition, and an idea of some of the things you can do with energy: move things around, heat them or cool them, produce light, transmit information and so forth.

When something's as everyday-familiar as energy it's easy to forget that it wasn't always this way, but in fact the concept of energy as a measurable quantity is only a couple of centuries old, and the closely related concept of work is even newer.

Energy is now commonly defined as the ability to do work, and work as a given force acting over a given distance.  For example, lifting a (metric) ton of something one meter near the Earth's surface requires exerting approximately 9800 Newtons of force over that distance, or approximately 9800 Newton-meters of work altogether.  A Joule of energy is the ability to do one Newton-meter of work, so lifting one ton one meter requires approximately 9800 Joules of energy, plus whatever is lost to inefficiency.
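To make the arithmetic concrete, here's the lifting example as a quick Python sketch (the variable names are mine, and g is the same rounded 9.8 m/s² used above):

```python
# Work to lift one metric ton one meter near the Earth's surface.
g = 9.8            # gravitational acceleration, m/s^2 (rounded)
mass_kg = 1000.0   # one metric ton
height_m = 1.0

force_newtons = mass_kg * g              # force needed to hold the mass up
work_joules = force_newtons * height_m   # force times distance, in newton-meters

print(force_newtons, work_joules)  # about 9800 N, about 9800 J
```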

As always, there's quite a bit more going on if you start looking closely.  For one thing, the modern physical concept of energy is more subtle than the common definition, and for another energy "lost" to inefficiency is only "lost" in the sense that it's now in a form (heat) that can't directly do useful work.  I'll get into some, but by no means all, of that detail later in this post and probably in others as well.

I'm not going to try to give an exact history of thermodynamics or calorimetry here, but I do want to call out a few key developments in those fields.  My main aim is to trace the evolution of energy as a concept from a concrete, pragmatic working definition born out of the study of steam engines to the highly abstract concept that underpins the current theoretical understanding of the physical world.



The concept of energy as we know it dates to somewhere around the turn of the 19th century, that is, the late 1700s and early 1800s.   At that point practical steam engines had been around for several decades, though they only really took off when Watt's engine came along in 1781.  Around the same time a number of key experiments were done, heat was recognized as a form of energy and a full theory of heat, work and the relationship between the two was formulated.

What makes things hot?  This is one of those "why is the sky blue?" questions that quickly leads into deep questions that take decades to answer properly.  The short answer, of course, is "heat", but what exactly is that?  A perfectly natural answer, and one of the first to be formalized into something like what we would call a theory, is that heat is some sort of substance, albeit not one that we can see, or weigh, or any of a number of other things one might expect to do with a substance.

This straightforward answer makes sense at first blush.  If you set a cup of hot tea on a table, the tea will get cooler and the spot where it's sitting on the table will get warmer.  The air around the cup also gets warmer, though maybe not so obviously.  It's completely reasonable to say that heat is flowing from the hot teacup to its surroundings, and to this day "heat flow" is still an academic subject.

With a little more thought it seems reasonable to say that heat is somehow trapped in, say, a stick of wood, and that burning the wood releases that heat, or that the Sun is a vast reservoir of heat, some of which is flowing toward us, or any of a number of quite reasonable statements about heat considered as a substance.  This notional substance came to be known as caloric, from the Latin for heat.

As so often happens, though, this perfectly natural idea gets harder and harder to defend as you look more closely.  For example, if you carefully weigh a substance before and after burning it, as Lavoisier did in 1772, you'll find that it's actually heavier after burning.  If burning something releases the caloric in it, then does that mean that caloric has negative weight?  Or perhaps it's actually absorbing cold, and that's the real substance?

On the other hand, you can apparently create as much caloric as you want without changing the weight of anything.  In 1797 Benjamin Thompson, Count Rumford, immersed an unbored cannon in water, bored it with a specially dulled borer and observed that the water was boiling hot after about two and a half hours.  The metal bored from the cannon was not observably different from the remaining metal of the cannon, the total weight of the two together was the same as the original weight of the unbored cannon, and you could keep generating heat as long as you liked.  None of this could be easily explained in terms of heat as a substance.

Quite a while later, in the 1840s, James Joule made precise measurements of how much heat was generated by a falling weight powering a stirrer in a vat of water.  Joule determined that heating a pound of water one degree Fahrenheit requires 778.24 foot-pounds of work (e.g., letting a 778.24-pound weight fall one foot, or a 77.824-pound weight fall ten feet, etc.). Ludwig Colding did similar research, and both Joule and Julius Robert von Mayer published the idea that heat and work can each be converted to the other.  This is not just significant theoretically.  Getting five digits of precision out of stirring water with a falling weight is pretty impressive in its own right.
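Joule's figure makes for an easy sanity check.  Here's the arithmetic as a small Python sketch (the function name and setup are mine, and the "fully converted, no losses" assumption is an idealization):

```python
# Joule's mechanical equivalent of heat: 778.24 foot-pounds of work
# heat one pound of water by one degree Fahrenheit.
FT_LB_PER_DEGREE_POUND = 778.24

def temp_rise_f(weight_lb, drop_ft, water_lb):
    """Degrees Fahrenheit the water warms if the falling weight's
    work is fully converted to heat (ignoring all losses)."""
    work_ft_lb = weight_lb * drop_ft
    return work_ft_lb / (FT_LB_PER_DEGREE_POUND * water_lb)

# A 778.24-pound weight falling one foot warms a pound of water 1 degree F,
# and so does a 77.824-pound weight falling ten feet.
print(temp_rise_f(778.24, 1.0, 1.0))
print(temp_rise_f(77.824, 10.0, 1.0))
```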

At this point we're well into the development of thermodynamics, which Lord Kelvin eventually defined in 1854 as "the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency."  This is a fairly broad definition, and the specific mention of electricity is interesting, but a significant portion of thermodynamics and its development as a discipline centers around the behavior of gases, particularly steam.


In 1662, Robert Boyle published his finding that the volume of a gas, say, air in a piston, is inversely proportional to the pressure exerted on it.  It's not news, and wasn't at the time, that a gas takes up less space if you put it under pressure.  Not having a fixed volume is a defining property of a gas.  However, "inversely proportional" says more.   It says if you double the pressure on a gas, its volume shrinks by half, and so forth.  Another way to put this is that pressure multiplied by volume remains constant.

In the 1780s, Jacques Charles formulated (but didn't publish) the idea that the volume of a gas was proportional to its temperature.  In 1801 and 1802, John Dalton and Joseph Louis Gay-Lussac published experimental results showing the same effect.  There was one catch: you had to measure temperature on the right scale.  A gas at 100 degrees Fahrenheit doesn't have twice the volume of a gas at 50 degrees, nor does it if you measure in Celsius.

However, if you plot volume vs. temperature on either scale you get a straight line, and if you put the zero point of your temperature scale where that line would show zero volume -- absolute zero -- then the proportionality holds.  Absolute zero is quite cold, as one might expect.  It's around 273 degrees below zero Celsius (about 460 degrees below zero Fahrenheit).  It's also unobtainable, though recent experiments in condensed matter physics have come quite close.
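You can reproduce that extrapolation with just two data points.  Here's a Python sketch (the measurements are hypothetical, chosen to match the Charles's-law prediction):

```python
# Charles's law: volume is linear in temperature. Fit a line through two
# (temperature, volume) measurements and extrapolate to zero volume.
t1, v1 = 0.0, 1.0                      # degrees C, liters (hypothetical sample)
t2, v2 = 100.0, 1.0 * 373.15 / 273.15  # predicted volume at 100 degrees C

slope = (v2 - v1) / (t2 - t1)          # liters per degree
absolute_zero_c = t1 - v1 / slope      # where the line hits zero volume

print(absolute_zero_c)  # close to -273.15
```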

Put those together and you get the combined gas law: Pressure times volume is proportional to temperature.

In 1811 Amedeo Avogadro hypothesized that two samples of gas at the same temperature, pressure and volume contain the same number of molecules, whether or not they are the same gas.  This came to be known as Avogadro's Law.  The number of molecules in a typical system is quite large.  It is usually expressed in terms of Avogadro's Number, approximately six followed by twenty-three zeroes, one of the larger numbers that one sees in regular use in science.

Put that together with the combined gas law and you have the ideal gas law:
PV = nRT
P is the pressure, V is the volume, n is the amount of gas (measured in moles rather than individual molecules), T is the temperature and R is the gas constant that makes the numbers and units come out right.
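As a quick illustrative calculation in SI units (the helper function is mine, not standard notation):

```python
# Ideal gas law, solved for pressure: P = nRT / V
R = 8.314  # gas constant, J / (mol K)

def pressure_pa(n_mol, temp_k, volume_m3):
    return n_mol * R * temp_k / volume_m3

# One mole of gas at 273.15 K in 22.4 liters comes out to roughly
# one atmosphere of pressure.
p = pressure_pa(1.0, 273.15, 0.0224)
print(p)  # about 101,000 Pa
```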

The really important abstraction here is state.  If you know the parameters in the ideal gas law -- the pressure, volume, temperature and how many gas particles there are -- then you know the gas's state.  This is all you need to know, and all you can know, about that gas as far as thermodynamics is concerned.  Since the number of gas particles doesn't change in a closed system like a steam engine (or at least an idealized one), you only need to know pressure, volume and temperature to know the state.

Since the ideal gas law relates those, you really only need to keep track of two of the three.  If you measure pressure, volume and temperature once to start with, and you observe that the volume remains constant while the pressure increases by 10%, you know that the temperature must be 10% higher than it was.  If the volume had increased by 20% but the pressure had dropped by 10%, the temperature must now be 8% higher (1.2 * 0.9 = 1.08).  And so forth.
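In code, that bookkeeping is a single multiplication (the function name is just for illustration):

```python
# With PV proportional to T, you only need the ratios: however pressure
# and volume have changed, the temperature ratio is their product.
def temperature_ratio(pressure_ratio, volume_ratio):
    return pressure_ratio * volume_ratio

# Volume up 20%, pressure down 10%: temperature is about 8% higher.
print(temperature_ratio(0.9, 1.2))  # about 1.08

# Volume constant, pressure up 10%: temperature is 10% higher.
print(temperature_ratio(1.1, 1.0))  # about 1.10
```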

You don't have to track pressure and volume particularly, or even two of {pressure, volume, temperature}.  There are other measures that will do just as well (we'll get to one of the important ones in a later post), but no matter how you define your measures you'll need two of them to account for the thermodynamic state of a gas and (as long as they aren't essentially the same measure in disguise), two will be enough.  Technically, there are two degrees of freedom.

This means that you can trace the thermodynamic changes in a gas on a two-dimensional diagram called a phase diagram (for a heat engine, typically drawn with pressure and volume as the axes).  Let's pause for a second to take that in.  If you're studying a steam engine (or in general, a heat engine) that converts heat into work (or vice versa) you can reduce all the movements of all the machinery, all the heating and cooling, down to a path on a piece of paper.  That's a really significant simplification.


In theory, the steam in a steam engine (or generally the working fluid in a heat engine), will follow a cycle over and over, always returning to the same point in the phase diagram (that is, the same state).    In practice, the cycle won't repeat exactly, but it will still follow a path through the phase diagram that repeats the same basic pattern over and over, with minor variations.

The heat source heats the steam and the steam expands.  Expanding means exerting force against the walls of whatever container it's in, say the surface of a piston.  That is, it means doing work.  The steam is then cooled, removing heat from it, and the steam returns to its original pressure, volume and temperature.  At that point, from a thermodynamic point of view, that's all we know about the steam.  We can't know, just from taking measurements on the steam, how many times it's been heated and cooled, or anything else about its history or future.  All you know is its current thermodynamic state.

As the steam contracts back to its original volume, its surroundings are doing work on it.  The trick is to manipulate the pressure, temperature and volume in such a way that the pressure, and thus the force, is lower on the return stroke than the power stroke, and the steam does more work expanding than is done on it contracting.  Doing so, it turns out, will involve putting more heat into the heating than comes out in the cooling.  Heat goes in, work comes out.


This leads us to one of the most important principles in science.  If you carefully measure what happens in real heat engines, and trace through the possible paths in a phase diagram, you find that you can convert heat to work and work to heat, and that you will always lose some waste heat to the surroundings, but when you add it all up (in suitable units and paying careful attention to the signs of the quantities involved), the total amount of heat transferred and work done never changes.  If you put in 100 Joules worth of heat, you won't get more than 100 Joules worth of work out.  In fact, you'll get less.  The difference will be wasted heating the surroundings.
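The bookkeeping for a single engine cycle is simple enough to sketch with made-up numbers (the 60-40 split is purely illustrative, not a property of any real engine):

```python
# First-law bookkeeping for one heat-engine cycle: the working fluid
# returns to its starting state, so the net work out must equal the
# heat taken in minus the heat rejected to the surroundings.
heat_in = 100.0       # heat from the source, joules
heat_rejected = 60.0  # waste heat dumped to the surroundings, joules

work_out = heat_in - heat_rejected  # energy is conserved
efficiency = work_out / heat_in     # fraction of input heat turned into work

print(work_out, efficiency)  # 40.0 0.4
```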

This is significant enough when it comes to heat engines, but that's only the beginning.  Heat isn't the only thing you can convert into work and vice versa.  You can use electricity to move things, and moving things to make electricity.  Chemical reactions can produce or absorb heat or produce electrical currents, or be driven by them.   You can spin up a rotating flywheel and then, say, connect it to a generator, or to a winch.

Many fascinating experiments were done, and the picture became clearer and clearer: Heat, electricity, the motion of an object, the potential for a chemical reaction, the stretch in a spring, the potential of a mass raised to a given height, among other quantities, can all be converted to each other, and if you measure carefully, you always find the total amount to be the same.

This leads to the conclusion that all of these are different forms of the same thing -- energy -- and that this thing is conserved, that is, never created or destroyed, only converted to different forms.


As far-reaching and powerful as this concept is, there were two other important shifts to come.  One was to take conservation of energy not as an empirical result of measuring the behavior of steam engines and chemical reactions, but as a fundamental law of the universe itself, something that could be used to evaluate new theories that had no direct connection to thermodynamics.

If you have a theory of how galaxies form over millions of years, or how electrons behave in an atom, and it predicts that energy isn't conserved, you're probably not going to get far.  That doesn't just mean that all the cool scientist kids will point and laugh (though a certain amount of that has been known to happen).  It means that sooner or later your theory will hit a snag you hadn't thought of and the numbers won't match up with reality*.  When this happens over and over and over, people start talking about fundamental laws.


The second major shift in the understanding of energy came with the quantum theory, that great upsetter of scientific apple carts everywhere.  At a macroscopic scale, energy still behaves something like a substance, like the caloric originally used to explain heat transfer.  In Count Rumford's cannon-boring experiment, mechanical energy is being converted into heat energy.  Heat itself is not a substance, but one could imagine that energy is, just one that can change forms and lacks many of the qualities -- color, mass, shape, and so forth -- that one often associates with a substance.

In the quantum view, though, saying that energy is conserved doesn't assume some substance or pseudo-substance that's never created or destroyed.  Saying that energy is conserved is saying that the laws describing the universe are time-symmetric, meaning that they behave the same at all times.  This is a consequence of Noether's theorem, one of the deepest results in mathematical physics, which relates conservation in general to symmetries in the laws describing a system.  Time symmetry implies conservation of energy.  Directional symmetry -- the laws work the same no matter which way you point your x, y and z axes -- implies conservation of angular momentum.

Both of these are very abstract.  In the quantum world you can't really speak of a particle rotating on an axis, yet you can measure something that behaves like angular momentum, and which is conserved just like the momentum of spinning things is in the macroscopic world.  Just the same, energy in the quantum world has more to do with the rates at which the mathematical functions describing particles vary over space and time, but because of how the laws are structured it's conserved and, once you follow through all the implications, energy as we experience it on our scale is as well.

This is all a long way from electricity bills and the engines that drove the industrial revolution, but the connections are all there.  Putting them together is one of the great stories in human thought.

* I suppose I can't avoid at least mentioning virtual particles here.  From an informal description, of particles being created and destroyed spontaneously, it would seem that they violate conservation of energy (considering matter as a form of energy).  They don't, though.  Exactly why they don't is beyond my depth and touches on deeper questions of just how one should interpret quantum physics, but one informal way of putting it is that virtual particles are never around for long enough to be detectable.  Heisenberg uncertainty is often mentioned as well.
