
Cosmic Search: Issue 2
(Volume 1 Number 2; March 1979)
[Article in magazine started on page 35]

Generalized Life
By: Jerome Rothstein

Can matter evolve, even under conditions very different from those on the earth, into configurations or patterns, static or dynamic, which can be called life-like in some sense?

Jerome Rothstein leads you by well-reasoned thermodynamic arguments to the conclusion that many life forms could evolve in the cosmos that we would have difficulty recognizing. In the broadest sense, he describes generalized intelligent life forms as self-replicating, computer-controlled heat engines that are able to play survival games. — Eds.

Introduction

Can matter evolve, under conditions very different from those obtaining at any time in the history of the earth, into patterns or configurations which can be called life-like in some sense? What kinds of behavior should we call life-like? What kinds of behavior should we call intelligent? How can we avoid the trap of framing our concepts in a manner so closely tied to our earth experience and environment that we would not even recognize exotic life forms in the cosmos if we found them? What kinds of generalized life might exist?

Such questions run very deep indeed. Trying to obtain solid scientific answers entails cutting across disciplines seldom yoked together in the past. Physics, chemistry, psychology, and philosophy are but a few. Whether we need to borrow concepts from ecology, organization theory, management science, sociology or game theory is an open question. The possibility that new laws or principles are operative is always present.

We must cast a wide net, rely on the most universal and best established laws we can, and try to keep our speculations both free and disciplined. By this I mean not fettered by old habits and prejudices, yet disciplined by an appreciation of the great work of those who constructed the foundations on which we stand. Striking a proper balance between freedom and discipline is hard to do, and it sometimes takes the insight and courage of a Galileo, Einstein or Freud to do it.

The laws of thermodynamics are probably the closest approximation we have in science to "eternal verities." The great revolutions in physics accompanying quantum mechanics and relativity, for example, left them essentially untouched even though the rest of physics underwent tremendous conceptual upheaval. There seems to be well-nigh universal agreement that although thermodynamics will doubtless undergo extensive development in its applications to new areas such as biology, complex systems, and systems not in equilibrium, its existing solid core will be preserved intact. We therefore base our search on the first and second laws of thermodynamics, particularly on extensions of the latter. Other laws of physics will provide harmony, counterpoint, and thematic variations on an essentially thermodynamic melody.

Thermodynamics and Evolution

The first law of thermodynamics is often stated as the law of conservation of energy, i.e., energy can neither be created nor destroyed, only changed from one form to another. Its significance goes beyond this, for conservation of energy, with dissipative processes (like friction) excluded, is already a theorem in both mechanics and electrodynamics. The first law applies even when processes like friction make mechanical energy seem to disappear. The new form in which the energy appears is "heat", and many have taken quantity of heat as actually being defined by means of the first law. Heat is not mechanical energy; it is not measured by the means used to obtain mechanical information. If this were not the case, there would be no need to set up the first law as the basis of a new discipline; it would only be a theorem of mechanics! Thermodynamics came into being because of the great practical importance of phenomena usually idealized away and neglected in mechanics.
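To see the first law as pure bookkeeping, here is a minimal numerical sketch in Python (the wheel's inertia and speed are assumed figures, not from the article):

    # First-law bookkeeping for a wheel slowed by bearing friction.
    I = 2.0        # moment of inertia of the wheel, kg*m^2 (assumed)
    omega = 10.0   # initial angular speed, rad/s (assumed)

    mechanical_energy = 0.5 * I * omega**2   # rotational kinetic energy, joules

    # When friction brings the wheel to rest, no energy is destroyed:
    # the full amount reappears as heat in the bearing.
    heat_generated = mechanical_energy

    print(f"Mechanical energy lost: {mechanical_energy:.1f} J")
    print(f"Heat gained by bearing: {heat_generated:.1f} J")  # equal, by the first law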

The second law of thermodynamics can be popularly characterized by saying that processes like friction have an inherently one-way aspect. For example, a spinning wheel may be slowed down by friction in its bearing, ultimately coming to rest, leaving the bearing hotter than it was before. But heating the bearing doesn't make the wheel spin (at least not without auxiliary apparatus, and then something else gets heated in the process). We can always convert "high grade" energy, like mechanical, electrical, or chemical energy, entirely into heat, but there are strong limitations on the conversion of "low grade" heat energy into a high grade form. These limitations are the province of the second law of thermodynamics, of which one form states that in real processes the energy bookkeeping of the world always "enriches" the low grade energy total at the expense of the high grade total, with "breaking even" an ideal limiting case. This is expressed quantitatively in terms of a new thermodynamic quantity, called entropy, which never decreases, remains constant only in ideal cases, and generally increases. When an isolated system comes to equilibrium, its entropy increases to the maximum value possible for the total amount of energy in the system. For the wheel discussed above, this occurs when all the energy of rotation has been turned into heat by friction, and the whole system has reached a uniform temperature (the bearing is no longer a "hot spot").
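The wheel's one-way story can be put in numbers. A sketch (illustrative values only): the dissipated energy E, deposited as heat at ambient temperature T, raises the total entropy by roughly E/T, while the reverse process would require an entropy decrease.

    # Second-law bookkeeping for the wheel brought to rest by friction.
    E = 100.0   # rotational energy turned into heat, joules (assumed)
    T = 300.0   # temperature of bearing and surroundings, kelvin (assumed)

    delta_S = E / T   # entropy increase, treating surroundings as a big reservoir
    print(f"Entropy increase: {delta_S:.3f} J/K")  # positive, as required

    # The reverse (bearing cools, wheel spins up) would need delta_S = -E/T < 0,
    # which the second law forbids for a spontaneous process.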


"Thermodynamics urges, and the structure of the world allows, the evolution of things which can be called heat engines and computers."

What has this to do with the evolution of life? Coming to equilibrium is like dying! The second law says everything is running down, but organic evolution involves the long-term build-up of complex systems from initially simple ones. But the running down of an isolated system is not at all synonymous with the running down of a non-isolated system in permanent communication with a source of energy and a "sink" for waste heat. An engine in that kind of situation can continue to run indefinitely. The second law is satisfied if the total entropy change of the whole system is positive. We can get an "evolution" toward a steadily "running" state instead of to an equilibrium state.
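In numbers (an assumed engine between assumed reservoirs): the engine returns to its starting state each cycle, so the net entropy change is carried by the source and the sink, and the second law only requires their total to be positive.

    # Entropy budget per cycle for an engine running between two reservoirs:
    #   delta_S_total = -Q_hot/T_hot + Q_cold/T_cold >= 0
    T_hot, T_cold = 600.0, 300.0   # source and sink temperatures, kelvin (assumed)
    Q_hot = 1000.0                 # heat drawn from the source per cycle, J (assumed)
    W = 300.0                      # work delivered per cycle by a real, imperfect engine
    Q_cold = Q_hot - W             # heat rejected to the sink (first law)

    delta_S_total = -Q_hot / T_hot + Q_cold / T_cold
    print(f"Total entropy change per cycle: {delta_S_total:.3f} J/K")
    # Positive: the engine can keep "running" forever without "running down".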

This is what happened here on earth when life evolved, for the spinning globe presents any part (not too near the poles!) alternately to the sun (source) and to space (sink); this cycle drives the "machinery" of vegetation to this day. Animal "machines" use plants or other animals as their energy sources, burn the fuel, and discharge heat and waste to their surroundings (sink). But explaining how an existing machine can function is a far cry from explaining how the machine came to be there in the first place. Remarkably, thermodynamics gives a rationale for both.

[Sidebar "Thermodynamic ABCs" (boxed in the original magazine) is not reproduced here.]

Living things are active and can do work on their environment. We will therefore advance our insight into their evolution if we can give a thermodynamic justification for the evolution of our system, over the course of time, from a state of no work done on each cycle (Condition A) (see box) to that of a Carnot heat engine (Condition B).
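The box itself is missing, but the two conditions, as glossed in the text, can be sketched numerically (temperatures and heat intake assumed):

    # Condition A: a cycle that does no work (heat in = heat out).
    # Condition B: an ideal Carnot engine between the same reservoirs.
    T_hot, T_cold = 600.0, 300.0   # source and sink temperatures, kelvin (assumed)
    Q_hot = 1000.0                 # heat taken in per cycle, J (assumed)

    W_A = 0.0                            # Condition A: all heat passed to the sink
    eta_carnot = 1.0 - T_cold / T_hot    # Carnot efficiency
    W_B = eta_carnot * Q_hot             # Condition B: maximum work allowed

    print(f"Condition A work per cycle: {W_A:.0f} J")
    print(f"Condition B work per cycle: {W_B:.0f} J (efficiency {eta_carnot:.0%})")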

We need to generalize the notion of evolution toward equilibrium to a set of permanently imposed conditions which prevent our system from ever reaching equilibrium. The evolutionary time scale is long compared to a cycle; in our terrestrial case, for example, evolution can take millions of years, while a cycle may be comparable to a day. We can then regard the cycle time as being vanishingly small and re-express the second law for this case as a positive rate of change of entropy (Condition C). Under constant constraints the system will ultimately be driven to some steady value of the entropy increase rate.

But how is this steady rate related to the initial rate? If our system is initially disorganized, it can be expected simply to take in and then give out equal quantities of heat per cycle, and this is the situation of no work (Condition A). This is the "worst" case, in which the rate of increase of entropy is the maximum possible for the source and sink temperatures specified. In equilibrium cases the rate of change of entropy is exactly zero (its minimum possible value) because entropy is maximum. Following Onsager's pioneering work (published in 1931), since extended by Prigogine and many others, we now know that in the steady state the rate of change of entropy is a minimum (or close to it). The approach to steady state is thus essentially an evolution toward the smallest rate of entropy production compatible with the nature of the system and the conditions imposed. But this says that insofar as it is possible, the system will evolve toward a combination of a perfect Carnot engine and a concentration of stored high grade energy (or something close to it). This is a highly organized system. The price in entropy increase required for its evolution is paid by degrading some high temperature heat from the source into low temperature heat rejected to the sink.
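The rates can be compared directly (same assumed conditions as before). Condition A produces entropy at the maximum rate the two temperatures allow, a perfect Carnot engine produces none, and any real engine falls in between; evolution toward the steady state is movement down this scale.

    def entropy_production(Q_hot, W, T_hot, T_cold):
        # entropy produced per cycle by an engine doing work W
        Q_cold = Q_hot - W
        return -Q_hot / T_hot + Q_cold / T_cold

    T_hot, T_cold, Q = 600.0, 300.0, 1000.0   # assumed, as before
    for label, W in [("Condition A (no work)", 0.0),
                     ("imperfect engine", 200.0),
                     ("Condition B (Carnot)", (1 - T_cold / T_hot) * Q)]:
        print(f"{label}: {entropy_production(Q, W, T_hot, T_cold):.3f} J/K per cycle")
    # Prints 1.667, 1.000, and 0.000: from the "worst" case down to the ideal.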

The theorem of minimum rate of entropy increase need be only a very rough approximation for our purposes. We need an organizing principle, but we do not require perfection. Evolution of an inefficient engine, as long as its efficiency is not zero, suffices.

Structure, Computation, and Life

We now see that thermodynamics allows organized systems to evolve under permanently maintained non-equilibrium conditions, but we want to know what systems have the potential to do so. Many systems lack such ability. A container of water or a metal bar can be alternately heated and cooled with essentially no tendency for one cycle to differ from the next. We assume, of course, that the temperature range is moderate; the container won't rupture, corrode, or leak, the water won't boil away or freeze, the bar won't melt, etc. These provisos suggest where to look, and the simple case where melting and freezing occur within a temperature cycle is a good place to start.


"The evolution of a system simulating life-like behavior is the evolution of life."

Consider a container with ice, water, and salt, in such proportions that all the salt would dissolve if all the ice melted, but if most of the water froze, then salt would crystallize out of solution. Such systems are not only important in old-fashioned home ice-cream freezers, but also in the Zarchin sea water desalination method (under study in Israel). One can choose cycle time, temperature range, container shape, and water to salt ratio to do many tricks just by temperature cycling alone. For example, salt introduced in one location could gradually disappear and settle elsewhere, and an initially uniform salt solution could eventually develop high and low salinity regions. Now electric cells can be built using such differences in concentration of dissolved salts. Our nerves and electric eels use them. So even this simple system soon evolves into something capable of storing some energy (in this case chemical). The heat flow through it permits it to do the work involved in changing salt concentrations in parts of an initially uniform solution.
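Standard electrochemistry says how much voltage such concentration differences can supply. A sketch via the Nernst equation (all values assumed for illustration):

    # Open-circuit voltage of a concentration cell between two brine regions.
    from math import log

    R = 8.314     # gas constant, J/(mol*K)
    F = 96485.0   # Faraday constant, C/mol
    T = 280.0     # temperature, kelvin (a chilly brine; assumed)
    z = 1         # charge number of the working ion, e.g. Na+ (assumed)

    c_high, c_low = 1.0, 0.1   # salt concentrations in the two regions, mol/L (assumed)

    E = (R * T / (z * F)) * log(c_high / c_low)   # Nernst equation, volts
    print(f"Concentration-cell EMF: {E * 1000:.0f} mV")
    # Tens of millivolts -- the same order as nerve membrane potentials.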

The key feature is the phase change (solid to liquid and back). Salt behaves differently in ice (solid) and water (liquid). A change in how much salt is dissolved is "switched on and off" by changing from solid to liquid and back. The essential point here is control over some property in the sense that it can be switched between (at least) two different levels, where the change in property becomes manifest as a change in behavior. When it is present the possibility of cumulative effects arises, whereby cyclic change in state can result in evolution to a different behavior.
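A toy model of such cumulative switching (the per-cycle transport fraction is an assumed parameter) shows how identical cycles can carry an initially uniform system to a persistently structured one:

    # Each freeze/thaw cycle switches salt transport "on" and moves a small
    # fraction of the salt from region 1 to region 2.
    salt = [50.0, 50.0]        # grams of salt in two regions, initially uniform
    transport_fraction = 0.05  # fraction moved per cycle (assumed)

    for cycle in range(100):
        moved = transport_fraction * salt[0]   # freezing expels salt from region 1
        salt[0] -= moved
        salt[1] += moved                       # ...and it settles in region 2

    print(f"After 100 cycles: region 1 = {salt[0]:.1f} g, region 2 = {salt[1]:.1f} g")
    # No single cycle does much, yet the cycling has built a lasting structure.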

Several remarks should be made at this point to emphasize how general and powerful are the principles exemplified in this simple example. First of all, in the present context there are many things equivalent to phase change. Gas becomes liquid when gas molecules become clusters of molecules which continually break up and reform, the change being essentially complete when the clusters are very large, on the average, and essentially no molecules spend more than a negligible fraction of the time free of cluster-mates. Cluster formation involves bonding together of molecules, analogous to bonding of atoms into molecules or to cross-linking of polymer chains in a liquid to form a solid plastic. Chemical reactions can be reversed, bonds can be broken as well as formed, the formation and dissolution of cross-linkages can change a viscous liquid to a jelly-like solid and back (used for locomotion by amoebae in extending and retracting pseudopodia), and processes rapid in one situation may be slowed or prohibited altogether in the other. Switching and control are possible in a large variety of situations, including many of exquisite sensitivity to changes in external parameters. In our example that parameter was temperature, but almost any physical quantity will do in an appropriate system (e.g. pressure, chemical concentration, stress, electric or magnetic fields, etc.).

The second remark is that cumulative effects often involve what can, with justice, be called memory (information storage), where the system records its history in the sense that its present properties reflect what has been done to it previously. After each such change we start a new "ballgame", so to speak. Again simple systems often have this property. If one bends a metal bar and straightens it, it is more difficult to bend it again. This work-hardening effect, well known to the village blacksmith and to ancient sword-makers, is still of paramount importance in construction and manufacture.

The third remark is that these two things taken together give us the basic elements of a computer. It is possible to interconnect large numbers of switches (control elements) and information storage units in such a way that the system as a whole can carry out any procedure, be it computation, data processing, or anything representable in such terms. Of course everything must be encoded in terms of quantities appropriate to the particular system, but given this, any describable input-output behavior can be modeled (simulated) by such a system.
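This is the standard result of switching theory: one controllable switch type suffices. A sketch using NAND as the switch builds both logic (XOR) and memory (a set/reset latch) from the same element:

    def nand(a, b):                 # one controllable two-level switch
        return 0 if (a and b) else 1

    def xor(a, b):                  # any logic function can be wired from NANDs
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    def sr_latch(s_bar, r_bar, q, q_bar):
        # Two cross-coupled NANDs store one bit; iterate until the feedback settles.
        for _ in range(4):
            q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
        return q, q_bar

    q, q_bar = sr_latch(0, 1, 0, 1)        # pulse "set": the latch stores a 1
    q, q_bar = sr_latch(1, 1, q, q_bar)    # release inputs: the bit is remembered
    print(xor(1, 0), q)                    # -> 1 1

Interconnect enough of these and, in principle, any describable input-output behavior can be encoded.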

It may now not seem like such a reckless extrapolation to say two things: First, thermodynamics urges, and the structure of the world allows, the evolution of things which can be called heat engines and computers. When sufficiently developed they could realize behavior of almost arbitrary complexity. If we could but describe what we mean by life-like or intelligent behavior, we could then see it modeled in the behavior of such a system. Secondly, we suggest that this is the basic idea needed to explain the origin of life and the eventual evolution of intelligence. After all, any physical system is an analog computer programmed to simulate its own actual behavior (which it does to perfection!). The evolution of a system simulating life-like behavior is the evolution of life.

Generalized Life

Life as we know it is chemically based. Information is stored largely in the specific composition and ordering of a small number of chemically distinguishable units in long chain molecules. Control (switching) is done largely by what chemists call catalysis. But thermodynamics applies to all systems in which concepts like work, energy, heat, and temperature are relevant. The tendency toward the evolution of "well-informed heat engines" discussed above, being purely thermodynamical, is independent of what specific kind of thermodynamical system is under consideration. All that is required, essentially, is information storage and control, no matter how it is done. Chemical configuration is only one means of storing information, and catalysis merely a control mechanism appropriate to that means. There are mechanical, electrical, and magnetic configurations capable of storing information, as in phonograph records or punched cards, in ferroelectric materials, or magnetic tapes.

Let us consider now some specific exotic systems in which evolution in complexity might conceivably lead to something like life. Our stability principles, which include both stable static configurations and stable dynamic configurations, originate, on the fundamental level of quantum mechanics, from two fundamental concepts. These are stationary conditions or states and the exclusion principle. The first gives both static and dynamic stability (it was conceived of by Bohr before quantum mechanics developed in its modern form). The second, discovered by Pauli, was applied by Bohr to building up the periodic table of all chemical elements. Briefly, in an atom or other system, no two electrons can be in the same state. So in building up heavier atoms by adding more electrons outside the nucleus, the electrons must go into states of higher momentum and energy (Bohr called them orbits originally). Carried further it explains many facts of atomic and molecular structure including stability, size, shape, and many other properties. Modern solid state physics and electronics are similarly based on quantum mechanics, and the field of quantum biology is now also rapidly expanding.

Transitions between stationary states can occur, like that from the lowest energy state (ground state) to a higher state when the system absorbs energy. By emitting energy it can return to the ground state, frequently by way of states of intermediate energy, or it may end up in a metastable state (one of higher energy than the ground state but with low probability of jumping to the ground state). With metastable states comes energy storage. If this stored energy can be made available in organized, rather than chaotic fashion, we have the foundation for heat engines. This is dramatically demonstrated by lasers, and many other examples can be cited. In lasers, emission of radiation, whose energy has been previously stored in metastable states, is done in organized fashion with the radiation acting as its own catalyst. The first spontaneously emitted photons trigger the production of others which are coherent with them (i.e. not disorganized relative to them); this process, called stimulated emission, was predicted by Einstein in 1917.
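A back-of-the-envelope sketch of that avalanche (populations and the gain coefficient are assumed): with more systems in the metastable upper state than in the lower, each photon breeds coherent copies until the stored energy is spent.

    # Photon avalanche from a population inversion (stimulated emission).
    n_upper, n_lower = 1_000_000.0, 100_000.0   # atoms in upper/lower states (assumed)
    photons = 1.0                               # one spontaneous "seed" photon
    gain = 1e-6   # stimulated-emission probability per photon per excess atom (assumed)

    for step in range(10):
        stimulated = gain * (n_upper - n_lower) * photons   # coherent copies
        photons += stimulated
        n_upper -= stimulated
        n_lower += stimulated
    print(f"Photons after 10 steps: {photons:.0f}")
    # Growth continues until the inversion -- the stored energy -- is exhausted.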

We are now ready to consider specific systems. We will not discuss any life possibilities based on carbon or other ordinary chemistries because they have monopolized our thinking in the past. Rather we will look for possibilities where life, as ordinarily conceived, is impossible. Among these are frozen regions, like the surface of Jupiter, and hot regions, like stars. We begin with the latter, starting with neutron stars.

Neutron stars are small, dense, rapidly rotating objects, perhaps ten miles in diameter, yet with a mass perhaps a quarter that of the sun. They have densities comparable to or higher than that of nuclear matter, possess tremendous magnetic fields, and are thought to be the objects known as pulsars. The surface of a neutron star marks a transition from ordinary densities to perhaps 10,000 times that of water, and consists mainly of iron 56, the isotope which is the end-point of nuclear burning. The temperature is of the order of 100,000 degrees (kelvin). In the crust, thought to be solid, densities and temperatures rise rapidly by factors of the order of 1000, increasing even more in deeper layers. Nuclei will exist in a neutron sea, the way metal ions exist in an electron sea in ordinary solid metals, and the deeper one goes, the more particles the average nucleus will contain. Eventually the crust nuclei, electrons and neutrons all become a neutron liquid with very large nuclei "dissolved" in it. These neutron-rich nuclei contain thousands of particles compared to the hundreds at most in ordinary matter. Eventually they become big enough, as one goes deeper, to overlap and form what can be called a "macronucleus" or super dense liquid consisting of neutrons, a few percent of protons, and a corresponding number of electrons. As the density increases still further, new heavy particles can be formed (baryons, hyperons), and it is thought that neutron stars have a dense hyperon core (perhaps ten billion times as dense as the surface).

What has this to do with generalized life? In the liquid core one has a transition region between small and large nuclei. The largest ones contain more particles than we humans contain atoms, and the small ones go down to very few particles. We have here the possibility of a "solution chemistry" of nucleons analogous to the aqueous solution chemistry of amino acids, proteinoids and proteins from which life evolved. The arguments given earlier from thermodynamics and quantum ideas are just as applicable to this case as to the chemical case. In principle, therefore, the theoretical basis for the origin of life is present in neutron stars just as it was on earth. We need only substitute big nuclei for big molecules, neutrons for water, and let our imaginations go.

There are other possibilities too. Who knows how many kinds of hyperons might be encountered deep inside the core? Mightn't there be concentric spheres of corresponding "big hyperon" solution chemistries? And going outward again, mightn't there be a region of cooperative effects involving iron 56 polymeric compounds? Magnetic field strength can serve as a thermodynamic variable just as pressure can, and on neutron stars, particularly at the surface, it will have to play a decisive role.

White dwarf stars are not as dense as neutron stars, and allow fewer possibilities for nucleon or hyperon chemistry. At their cores, however, conditions may be extreme enough to allow some situations to develop like those at the outer layers of neutron stars. Also, if they or other stars are sufficiently rich in elements appreciably heavier than hydrogen, it is possible to envisage concentric spheres each with its own solution chemistry of atomic cores, partially stripped of their electron shells, in an electron sea. These could provide a spectrum of possibilities between ordinary chemistry and nucleon or hyperon "chemistry."

The foregoing situations are hot and heavy, but their opposites, cold and light, admit similar possibilities. Fundamental particles generally have a magnetic moment, or spin; they are like tiny magnets. Interactions between spins of different atomic or molecular systems are generally weak and drop off rapidly with increased distance between the systems. Within nuclei, atoms, and molecules, on the other hand, they are relatively strong, tending to line up antiparallel, cancelling out their magnetic effect. This explains why most substances are non-magnetic under ordinary conditions. But at very low temperatures even weak interactions can be felt because general thermal agitation of atoms and molecules has been reduced. Many substances which are non-magnetic at ordinary temperatures become ferromagnetic or antiferromagnetic when the temperature is low enough. So it may not be absurd to imagine that complex magnetic systems could evolve somewhere in cold space, possibly even as close as Jupiter. That planet has high magnetic fields, a cold surface, a solar energy source including ultra-violet and x-rays, and internal sources also.
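An Ising-chain sketch makes the role of temperature concrete (coupling, chain length, and temperatures are illustrative assumptions; the chain starts ordered so we can watch whether thermal agitation destroys the order):

    import random
    from math import exp

    def magnetization(T, n=200, J=1.0, sweeps=200):
        spins = [1] * n                      # start fully ordered
        for _ in range(sweeps * n):          # Metropolis updates
            i = random.randrange(n)
            # energy cost of flipping spin i (periodic boundary conditions)
            dE = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if dE <= 0 or random.random() < exp(-dE / T):
                spins[i] = -spins[i]
        return abs(sum(spins)) / n

    for T in (5.0, 0.2):                     # hot, then cold, relative to the coupling J
        print(f"T = {T}: |magnetization| ~ {magnetization(T):.2f}")
    # Hot: agitation wins and the order dissolves (value near 0).
    # Cold: the same weak couplings now maintain collective order (value near 1).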

It is possible to speculate about superconductivity and superfluidity along the same lines as just done with spin systems. The large variety of vortex systems in superfluids can rival that of big molecular systems in chemistry. They might even coexist with the magnetic systems just discussed. In addition superfluid systems might well exist in neutron stars! For neutrons in the crust, at sufficiently low densities, should interact attractively to form pairs. This creates a situation analogous to formation of electron pairs, which is the basis of modern superconductivity theory. This topic might well have been discussed earlier, and is clearly important in the biology (!) of neutron stars. It might even be important in deeper layers too, for there may be other kinds of pair formation.

How about large scale low density systems? Magnetic fields permeate space and interact strongly with the tenuous plasma (ionized gas) found there. The motions of magnetic stars (including neutron stars) will do work on the plasma, and the plasma can affect many objects over a large region. Magnetohydrodynamic instabilities, turbulence, and driven steady states suggest that evolution of organized large scale behavior is possible. Refusal to entertain any possibility here of evolution of life-like behavior may be too traditional! But why stop there? Why mightn't one apply a kind of statistical mechanical analysis to galaxies, in which they play a role like that of molecules, and then use the non-linear equations of general relativity to envisage the slow evolution of complex assemblages of galaxies? Might they not tend to become vast well-informed heat engines?

Perhaps this is a good point to terminate this kind of speculation, at least in print. I will say, however, that once you get into the spirit of the game you can cook up more and more possibilities for the evolution of generalized life. I leave it as an exercise for the reader to try this for global wind patterns, ocean currents, and the magnetosphere, with the next assignment to find six more!

Conclusions

What can we say about the questions asked in the introduction, what new ones does the discussion suggest, and are there any unforeseen insights now visible?


"In many cases there is simply no way in which we could communicate with them, and even if we could there might be no way for either of us to ever recognize a communication from the other as such."

In my opinion matter can evolve into configurations which can justly be called life-like and do so under many different conditions, in many parts of the universe. I see no reason to believe that terrestrial life forms are at all similar, in their physical make-up, to many theoretically possible strange forms, and I consider their a priori probability of evolving, given only the astrophysicist's big-bang universe, to be no less than ours. In many cases there is simply no way in which we could communicate with them, and even if we could there might be no way for either of us ever to recognize a communication from the other as such. Our neutron star friends might only be able to sense superfluid nucleon currents, for example, and neither matter nor radiation from us could ever penetrate the star to their level. Similarly no information-bearing packet of matter and energy could survive the journey from them to us. The best we might sense is a "glitch" in pulsar rate or the like, which we would explain by other hypotheses. Contact by us with a cold form, like a Jovian spin system, would probably kill it. Differences in size and time-scale, as well as habitat and sensory organs, could make communication impossible. There's no talking to a being whose attention cannot be gotten in less than ten million years, or whose life span is less than a nanosecond. We can only hope to communicate with beings whose life spans differ from ours by at most a small number of orders of magnitude, and who can respond to signals we generate by means of signals we can detect.

[Cartoon: "Mr. Slow - Mr. Fast"]

We have said essentially nothing about what kind of behavior should be called life-like or intelligent. What we did was to see how heat engines and computers could evolve, relying on their ability to realize any operationally describable behavior, and thus life-like or intelligent behavior.

Almost any definition we select is open to dispute because of its arbitrary choice of defining conditions. The least parochial definitions that have occurred to me, which are not so general as to be meaningless, are as follows. Any dynamic pattern in a non-equilibrium system capable of replicating itself will be said to exhibit generalized life-like behavior, with the proviso that the elements of the pattern are part of a higher entropy configuration before combination than after. This fits the Carnot cycle picture described earlier, for the more such Carnot engines are operating, the less the total rate of entropy increase. One can even extend the earlier argument to favor self-replication as the fastest way to achieve minimum rate of entropy generation. Darwinian selection, in a real sense, appears as a kind of thermodynamic law, for in this context thermodynamic evolution favors the most efficient engines.
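The replication argument can be caricatured numerically (all parameters assumed): a fixed heat current flows from source to sink; what the engines intercept is converted with little entropy production, the rest leaks as pure conduction, so each additional copy lowers the total rate.

    T_hot, T_cold = 600.0, 300.0   # reservoir temperatures, kelvin (assumed)
    Q_total = 1000.0               # heat leaving the source per second, J/s (assumed)
    Q_per_engine = 100.0           # heat each engine can process, J/s (assumed)

    def entropy_rate(n_engines):
        q_engines = min(Q_total, n_engines * Q_per_engine)  # handled efficiently
        q_leak = Q_total - q_engines                        # pure conduction
        # Ideal Carnot engines produce no entropy; leakage produces the maximum.
        return q_leak * (1 / T_cold - 1 / T_hot)

    for n in (0, 2, 5, 10):
        print(f"{n} engines: entropy production {entropy_rate(n):.3f} J/(K*s)")
    # 1.667, 1.333, 0.833, 0.000: replication is thermodynamically favored.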

Intelligent behavior is harder to specify, and the following attempt will probably engender much more criticism than the previous one. A system exhibiting life-like behavior will be said to exhibit intelligent life-like behavior if it can play survival games. More explicitly, it can gather information (by measurement) about its environment and compute a response which preserves it, whereas without such an appropriately computed response, it could be destroyed. This makes a wheat virus, which unsuccessfully attacks a new wheat variety, intelligent by this definition if it can mutate to a new form able to attack the wheat. Many instinctive behaviors or adaptations also become intelligent by this definition. While this is uncomfortable, my attempts to avoid including such cases were more so.
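The survival game itself can be caricatured in a few lines (hazards and responses invented for illustration): the measuring system computes the response that preserves it, while a blind system merely guesses.

    import random

    HAZARDS = ("heat", "cold", "acid")
    COUNTER = {"heat": "cool down", "cold": "warm up", "acid": "neutralize"}

    def survival_rate(measures, trials=1000):
        alive = 0
        for _ in range(trials):
            hazard = random.choice(HAZARDS)
            if measures:
                response = COUNTER[hazard]   # gather information, compute response
            else:
                response = random.choice(list(COUNTER.values()))   # blind guess
            alive += (response == COUNTER[hazard])
        return alive / trials

    print(f"measuring system: {survival_rate(True):.0%} survival")    # ~100%
    print(f"blind system:     {survival_rate(False):.0%} survival")   # ~33%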

I close with what seems to me to be a pleasing new insight of almost poetic beauty. It is that the gloomy heat-death of the universe, so often thought to be an inescapable consequence of the second law of thermodynamics, need not follow at all. To paraphrase Mark Twain, I believe reports of the heat-death of the universe in X billion years are grossly exaggerated. As the universe cools, low-temperature forms of generalized life will be able to evolve. I believe it plausible that cold life will win over heat-death, that from the big bang on, there has been a succession of generalized life forms evolving, that they are still evolving, and that we share the cosmos with them.


Jerome Rothstein was born in the Bronx, New York, in 1918. After receiving bachelor's degrees from the City College of New York and the Jewish Theological Seminary of America and a master's degree in physics from Columbia University (1940), he was with the U.S. Army Research and Development Laboratories from 1942 to 1957, working in the areas of solid state physics and physical electronics. For the next 10 years he was associated with several industrial organizations, including Edgerton, Germeshausen and Grier of Bedford, Massachusetts. Since 1967 Rothstein has been a member of the faculty of the Ohio State University, where he is now professor of Computer and Information Science and of Biophysics. His publications, covering a wide range of topics, number in the hundreds. In 1977 he received the most original paper award of the International Conference on Parallel Processing, and the previous year its best paper award.

