The Thermodynamics of 9-11

When hijacked airliners crashed into the Twin Towers of the World Trade Center, in New York City [on 11 September 2001], each injected a burning cloud of aviation fuel throughout the 6 levels (WTC 2) to 8 levels (WTC 1) of the impact zone. The burning fuel ignited the office furnishings: desks, chairs, shelving, carpeting, work-space partitions, wall and ceiling panels, as well as paper and plastic of various kinds.

How did these fires progress? How much heat could they produce? Was this heat enough to seriously weaken the steel framework? How did this heat affect the metal in the rubble piles in the weeks and months after the collapse? This report is motivated by these questions, and it will draw ideas from thermal physics and chemistry. My previous report on the collapses of the WTC Towers described the role of mechanical forces (1).

Summary of the National Institute of Standards and Technology (NIST) Report

Basic facts about the WTC fires of 9/11/01 are abstracted by the numerical quantities tabulated here.

Table 1, Time and Energy of WTC Fires

ITEM                            WTC 1        WTC 2
impact time (a.m.)              8:46:30      9:02:59
collapse time (a.m.)            10:28:22     9:58:59
time to collapse                1:41:52      0:56:00
impact zone levels              92-99        78-83
levels in upper block           11           27
heat rate (first 40 minutes)    2 GW         1 GW
total heat energy               8000 GJ      3000 GJ

Tower 1 stood for one hour and forty-two minutes after being struck between levels 92 and 99 by an airplane; the block above the impact zone had 11 levels. During the first 40 minutes of this time, fires raged with an average heat release rate of 2 GW (GW = gigawatts = 10^9 watts), and the total heat energy released during the interval between airplane impact and building collapse was 8000 GJ (GJ = gigajoules = 10^9 joules).

A joule is a unit of energy; a watt is a unit of power; and one watt equals an energy delivery rate of one joule per second.

Tower 2 stood for fifty-six minutes after being struck between levels 78 and 83, isolating an upper block of 27 levels. The fires burned at a rate near 1 GW for forty minutes, diminishing later; and a total of 3000 GJ of heat energy was released by the time of collapse.
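
As a check on this bookkeeping, here is a minimal Python sketch converting a steady heat release rate into an energy total (values from Table 1; the split between the first 40 minutes and the remainder of each fire is only illustrative):

    # Power (GW) held for a time (minutes) gives energy in GJ,
    # since 1 GW sustained for 1 second releases 1 GJ.
    def energy_GJ(power_GW, minutes):
        return power_GW * minutes * 60

    print(energy_GJ(2, 40))  # WTC 1, first 40 minutes: 4800 GJ of the 8000 GJ total
    print(energy_GJ(1, 40))  # WTC 2, first 40 minutes: 2400 GJ of the 3000 GJ total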

WTC 2 received half as much thermal energy during the first 40 minutes after impact, had just over twice the upper block mass, and fell in just over half the time observed for WTC 1. It would seem that WTC 1 stood longer, despite receiving more thermal energy, because its upper block was less massive.

The data in Table 1 are taken from the executive summary of the fire safety investigation by NIST (2).

The NIST work combined materials and heat transfer lab experiments, full-scale tests (wouldn’t you like to burn up office cubicles?), and computer simulations to arrive at the history and spatial distribution of the burning. From this, the thermal histories of all the metal supports in the impact zone were calculated (NIST is very thorough), which in turn were used as inputs to the calculations of stress history for each support. Parts of the structure that were damaged or missing because of the airplane collision were accounted for, as was the introduction of combustible mass by the airplane.

Steel loses strength with heat. For the types of steel used in the WTC Towers (plain carbon, and vanadium steels) the trend is as follows, relative to 100% strength at habitable temperatures.

Table 2, Fractional Strength of Steel at Temperature

Temperature, degrees C      Fractional Strength, %
200                                     86
400                                     73
500                                     66
600                                     43
700                                     20
750                                     15
800                                     10

I use C for Centigrade, F for Fahrenheit, and do not use the degree symbol in this report.

The fires heated the atmosphere in the impact zone (a mixture of gases and smoke) to temperatures as high as 1100 C (2000 F). However, there was a wide variation of gas temperature with location and over time because of the migration of the fires toward new sources of fuel, a complicated and irregular interior geometry, and changes of ventilation over time (e.g., more windows breaking). Early after the impact, a floor might have some areas at habitable temperatures, and other areas as hot as the burning jet fuel, 1100 C. Later on, after the structure had absorbed heat, the gas temperature would vary over a narrower range, approximately 200 C to 700 C away from centers of active burning.

As can be seen from Table 2, steel loses half its strength when heated to about 570 C (1060 F), and nearly all of it once past 700 C (1300 F). Thus, the structure of the impact zone, with a temperature that varied between 200 C and 700 C near the time of collapse, would have had only between 20% and 86% of its original strength at any given location.
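
To make the interpolation explicit, here is a minimal Python sketch using the Table 2 data; the straight-line interpolation between tabulated points (and the 100% value at 20 C) is my simplification of the NIST curves:

    # Fractional strength of steel vs. temperature, from Table 2.
    import numpy as np

    temps_C      = [20, 200, 400, 500, 600, 700, 750, 800]
    strength_pct = [100, 86, 73, 66, 43, 20, 15, 10]

    def strength_at(T_C):
        # linear interpolation between the tabulated points
        return float(np.interp(T_C, temps_C, strength_pct))

    print(strength_at(570))  # ~50%, roughly half strength near 570 C
    print(strength_at(700))  # 20%, nearly all strength lost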

The steel frames of the WTC Towers were coated with “sprayed fire resistant materials” (SFRMs, or simply “thermal insulation”). A key finding of the NIST Investigation was that the thermal insulation coatings were applied unevenly — even missing in spots — during the construction of the buildings, and — fatally — that parts of the coatings were knocked off by the jolt of the airplane collisions.

Spraying the lumpy gummy insulation mixture evenly onto a web of structural steel, assuming it all dries properly and none is banged off while work proceeds at a gigantic construction site over the course of several years, is an unrealistic expectation. Perhaps this will change, as a “lesson learned” from the disaster. The fatal element in the WTC Towers story is that enough of the thermal insulation was banged off the steel frames by the airplane jolts to allow parts of frames to heat up to 700 C. I estimate the jolts at 136 times the force of gravity at WTC 1, and 204 at WTC 2.

The pivotal conclusion of the NIST fire safety investigation is perhaps best shown on page 32, in Chapter 3 of Volume 5G of the Final Report (NIST NCSTAR 1-5G WTC Investigation), which includes a graph from which I extracted the data in Table 2, and states the following two paragraphs. (The NIST authors use the phrase “critical temperature” for any value above about 570 C, when steel is below half strength.)

<><><>

“As the insulation thickness decreases from 1 1/8 in. to 1/2 in., the columns heat up quicker when subjected to a constant radiative flux. At 1/2 in. the column takes approximately 7,250 s (2 hours) to reach a critical temperature of 700 C with a gas temperature of 1,100 C. If the column is completely bare (no fireproofing) then its temperature increases very rapidly, and the critical temperature is reached within 350 s. For a bare column, the time to reach a critical temperature of 700 C ranges between 350 to 2,000 s.

“It is noted that the time to reach critical temperature for bare columns is less than the one hour period during which the buildings withstood intense fires. Core columns that have their fireproofing intact cannot reach a critical temperature of 600 C during the 1 or 1 1/2 hour period. (Note that WTC 1 collapsed in approximately 1 1/2 hour, while WTC 2 collapsed in approximately 1 hour). This implies that if the core columns played a role in the final collapse, some fireproofing damage would be required to result in thermal degradation of its strength.” (3)

<><><>

Collapse

Airplane impact sheared columns along one face and at the building’s core. Within minutes, the upper block had transferred a portion of its weight from central columns in the impact zone, across a lateral support at the building crown called the “hat truss,” and down onto the three intact outer faces. Over the course of the next 56 minutes (WTC 2) and 102 minutes (WTC 1) the fires in the impact zone would weaken the remaining central columns, and this steadily increased the downward force exerted on the intact faces. The heat-weakened frames of the floors sagged, and this bowed the exterior columns inward at the levels of the impact zone. Because of the asymmetry of the damage, one of the three intact faces took up much of the mounting load. Eventually, it buckled inward and the upper block fell. (1)

Now, let’s explore heat further.

How Big Were These Fires?

I will approximate the size of a level (1 story) in each of the WTC Towers as a volume of 16,080 m^3 with an area of 4020 m^2 and a height of 4 m (4). Table 3 shows several ways of describing the total thermal energy released by the fires.

Table 3, Magnitude of Thermal Energy in Equivalent Weight of TNT

ITEM                        WTC 1       WTC 2
energy (Q)                  8000 GJ     3000 GJ
# levels                    8           6
tons of TNT                 1912        717
tons/level                  239         120
lb/level                    478,000     239,000
kg/m^2 (impact floors)      54          27
lb/ft^2 (impact floors)     11          6

The fires in WTC 1 released an energy equal to that of an explosion of 1.9 kilotons of TNT; the energy equivalent for WTC 2 is 717 tons. Obviously, an explosion occurs in a fraction of a second while the fires lasted an hour or more, so the rates of energy release were vastly different. Even so, this comparison may sharpen the realization that these fires could weaken the framework of the buildings significantly.
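
The conversion behind Table 3 is simple; here is a minimal Python sketch (4.184 GJ per ton of TNT is the standard equivalence; 2000 lb per ton matches the table):

    # Convert fire energy (GJ) into TNT equivalents, total and per level.
    GJ_PER_TON_TNT = 4.184
    LB_PER_TON     = 2000

    def tnt_breakdown(Q_GJ, levels):
        tons = Q_GJ / GJ_PER_TON_TNT
        return tons, tons / levels, tons * LB_PER_TON / levels

    print(tnt_breakdown(8000, 8))  # WTC 1: ~1912 tons, ~239 tons/level, ~478,000 lb/level
    print(tnt_breakdown(3000, 6))  # WTC 2: ~717 tons, ~120 tons/level, ~239,000 lb/level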

How Hot Did The Buildings Become?

Let us pretend that the framework of the building is made of “ironcrete,” a fictitious mixture of 72% iron and 28% concrete. This framework takes up 5.4% of the volume of the building, the other 94.6% being air. We assume that everything else in the building is combustible or an inert material, and the combined mass and volume of these are insignificant compared to the mass and volume of ironcrete. I arrived at these numbers by estimating volumes and cross sectional areas of metal and concrete in walls and floors in the WTC Towers.

The space between floors is under 4 meters; and the floors include a layer of concrete about 1/10 meter thick. The building’s horizontal cross-section was a 63.4 meter square. Thus, the gap between floors was nearly 1/10 of the distance from the center of the building to its periphery. Heat radiated by fires was more likely to become trapped between floors, and stored within the concrete floor pans, than it was to radiate through the windows or be carried out through broken windows by the flow of heated air. We can estimate a temperature of the framework, assuming that all the heat became stored in it.

The amount of heat that can be stored in a given amount of matter is a property specific to each material, and is called heat capacity. The ironcrete mixture would have a volumetric heat capacity of Cv = 2.8*10^6 joules/(Centigrade*m^3); (* = multiply). In the real buildings, the large area of the concrete pads would absorb the heat from the fires and hold it, since concrete conducts heat very poorly. The effect is to bathe the metal frame with heat as if it were in an oven or kiln. Ironcrete is my homogenization of materials to simplify this numerical example.

The quantity of heat energy Q absorbed within a volume V of material with a volumetric heat capacity Cv, whose temperature is raised by an amount dT (for “delta-T,” a temperature difference) is Q = Cv*V*dT. We can solve for dT. Here, V = (870 m^3)*(# levels); also dT(1) corresponds to WTC 1, and dT(2) corresponds to WTC 2.

dT(1) = (8 x 10^12)/[(2.8 x 10^6)*(870)*8] = 410 C,

dT(2) = (3 x 10^12)/[(2.8 x 10^6)*(870)*6] = 205 C.

Our simple model gives a reasonable estimate of an average frame temperature in the impact zone. The key parameter is Q (for each building). NIST spent considerable effort to arrive at the Q values shown in Table 3 (3). Our model gives a dT comparable to the NIST results because both calculations deposit the same energy into about the same amount of matter. Obviously, the NIST work accounts for all the details, which is necessary to arrive at temperatures and stresses that are specific to every location over the course of time. Our equation of heat balance Q = Cv*V*dT is an example of the conservation of energy, a fundamental principle of physics.
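
For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the heat balance, using the values defined above:

    # Temperature rise dT = Q / (Cv * V) for the "ironcrete" model.
    CV_IRONCRETE = 2.8e6   # volumetric heat capacity, J/(C*m^3)
    V_PER_LEVEL  = 870.0   # m^3 of ironcrete per level (5.4% of 16,080 m^3)

    def delta_T(Q_joules, levels):
        return Q_joules / (CV_IRONCRETE * V_PER_LEVEL * levels)

    print(delta_T(8e12, 8))  # WTC 1: ~410 C
    print(delta_T(3e12, 6))  # WTC 2: ~205 C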

Well, Can The Heat Weaken The Steel Enough?

On this, one either believes or one doesn’t believe. Our simple example shows that the fires could heat the frames into the temperature range NIST calculates. It seems entirely reasonable that steel in areas of active and frequent burning would experience greater heating than the averages estimated here, so hotspots of 600 C to 700 C seem completely believable. Also, the data for WTC Towers steel strength at elevated temperatures is not in dispute. I believe NIST; answer: yes.

Let us follow time through a sequence of thermal events.

Fireball

The airplanes hurtling into the buildings with speeds of at least 200 m/s (450 mph) fragmented into exploding torrents of burning fuel, aluminum and plastic. Sparks generated from the airframe by metal fracture and impact friction ignited the mixture of fuel vapor and air. This explosion blew out windows and billowed burning fuel vapor and spray throughout the floors of the impact zone, and along the stairwells and elevator shafts at the center of the building; burning liquid fuel poured down the central shafts. Burning vapor, bulk liquid and droplets ignited most of what they splattered upon. The intense infrared radiation given off by the 1100 C (2000 F) flames quickly ignited nearby combustibles, such as paper and vinyl folders. Within a fraction of a second, the high pressure of the detonation wave had passed, and a rush of fresh air was sucked in through window openings and the impact gash, sliding along the tops of the floors toward the centers of intense burning.

Hot exhaust gases: carbon monoxide (CO), carbon dioxide (CO2), water vapor (H2O), soot (carbon particles), unburned hydrocarbons (combinations with C and H), oxides of nitrogen (NOx), and particles of pulverized solids vented up stairwells and elevator shafts, and formed thick hot layers underneath floors, heating them while slowly edging toward the openings along the building faces. Within minutes, the aviation fuel was largely burned off, and the oxygen in the impact zone depleted.

Thermal Storage

Fires raged throughout the impact zone in an irregular pattern dictated by the interplay of the blast wave with the distribution of matter. Some areas had intense heating (1100 C), while others might still be habitable (20 C). The pace of burning was regulated by the area available for venting the hot exhaust gases, and the area available for the entry of fresh air. Smoke was cleared from the impact gash by air entering as the cycle of flow was established. The fires were now fueled by the contents of the buildings.

Geometrically, the cement floors had large areas and were closely spaced. They intercepted most of the infrared radiation emitted in the voids between them, and they absorbed heat (by conduction) from the slowly moving (“ventilation limited”) layer of hot gases underneath each of them. Concrete conducts heat poorly, but can hold a great deal of it. The metal reinforcing bars within concrete, as well as the metal plate underneath the concrete pad of each WTC Towers floor structure, would tend to even out the temperature distribution gradually.

This process of “preheating the oven” would slowly raise the average temperature in the impact zone while narrowing the range of extremes in temperature. Within half an hour, heat had penetrated to the interior of the concrete, and the temperature everywhere in the impact zone was between 200 C and 700 C, away from sites of active burning.

Thermal Decomposition — “Cracking”

Fire moved through the impact zone by finding new sources of fuel, and burning at a rate limited by the ventilation, which changed over time.

Heat within the impact zone “cracks” plastic into a sequence of decreasingly volatile hydrocarbons, similar to the way heat separates out an array of hydrocarbon fuels in the refining of crude oil. As plastic absorbs heat and begins to decompose, it emits hydrocarbon vapors. These may flare if oxygen is available and their ignition temperatures are reached. Also, plumes of mixed hydrocarbon vapor and oxygen may detonate. So, a random series of small explosions might occur during the course of a large fire.

Plastics not designed for use in high temperature may resemble soft oily tar when heated to 400 C. The oil in turn might release vapors of ethane, ethylene, benzene and methane (there are many hydrocarbons) as the temperature climbs further. All these products might begin to burn as the cracking progresses, because oxygen is present and sources of ignition (hotspots, burning embers, infrared radiation) are nearby. Soot is the solid end result of the sequential volatilization and burning of hydrocarbons from plastic. Well over 90% of the thermal energy released in the WTC Towers came from burning the normal contents of the impact zones.

Hot Aluminum

Aluminum alloys melt at temperatures between 475 C and 640 C, and molten aluminum was observed pouring out of WTC 2 (5). Most of the aluminum in the impact zone was from the fragmented airframe; but many office machines and furniture items can have aluminum parts, as can moldings, fixtures, tubing and window frames. The temperatures in the WTC Towers fires were too low to vaporize aluminum; however, the forces of impact and explosion could have broken some of the aluminum into small granules and powder. Chemical reactions with hydrocarbon or water vapors might have occurred on the surfaces of freshly granulated hot aluminum.

The most likely product of aluminum burning is aluminum oxide (Al2O3, “alumina”). Because of the tight chemical bonding between the two aluminum atoms and three oxygen atoms in alumina, the compound is very stable and quite heat resistant, melting at 2054 C and boiling at about 3000 C. The affinity of aluminum for oxygen is such that with enough heat it can “burn” to alumina when combined with water, releasing hydrogen gas from the water,

2*Al + 3*H2O + heat -> Al2O3 + 3*H2.

Water is introduced into the impact zone through the severed plumbing at the building core, as moisture from the outside air, and by being "cracked" out of the gypsum wall panels and, to a lesser extent, out of the concrete (the last two are both hydrated solids). Water poured on an aluminum fire can be "fuel to the flame."

When a mixture of aluminum powder and iron oxide powder is ignited, it burns to iron and aluminum oxide,

2*Al + Fe2O3 + ignition -> Al2O3 + 2*Fe.

This is thermite. The reaction produces a temperature that can melt steel (above 1500 C, 2800 F). The rate of burning is governed by the pace of heat diffusion from the hot reaction zone into the unheated powder mixture. Granules must absorb sufficient heat to arrive at the ignition temperature of the process. The ignition temperature of a quiescent powder of aluminum is 585 C. The ignition temperatures of a variety of dusts were found to be between 315 C and 900 C, by scientists developing solid rocket motors. Burning thermite is not an accelerating chain reaction (“explosion”), it is a “sparkler.” My favorite reference to thermite is in the early 1950s motion picture, “The Thing.”

Did patches of thermite form naturally, by chance, in the WTC Towers fires? Could there really have been small bits of melted steel in the debris as a result? Could there have been “thermite residues” on pieces of steel dug out of the debris months later? Maybe, but none of this leads to a conspiracy. If the post-mortem “thermite signature” suggested that a mass of thermite comparable to the quantities shown in Table 3 was involved, then further investigation would be reasonable. The first task of such an investigation would be to produce a “chemical kinetics” model of the oxidation of the fragmented aluminum airframe, in some degree of contact to the steel framing, in the hot atmosphere of hydrocarbon fires in the impact zone. Once Nature had been eliminated as a suspect, one could proceed to consider Human Malevolence.

Smoldering Rubble

Nature is endlessly creative. The deeper we explore, the more questions we find.

Steel columns along a building face, heated to between 200 C and 700 C, were increasingly compressed and twisted into a sharpening bend. With increasing load and decreasing strength over the course of an hour or more, the material lost the ability to rebound elastically had the load been released. The steel entered the range of plastic deformation: it could still be stretched through a bend, but like taffy it would take on a permanent set. Eventually, it snapped.

Months later, when this section of steel was dug out of the rubble pile, would the breaks have the fluid look of drawn-out taffy, or perhaps of "melted" steel now frozen in time? Or would they be clean breaks, like the edges of glass fragments; or rough, granular breaks, as through concrete?

The basements of the WTC Towers included car parks. After the buildings collapsed, it is possible that gasoline fires broke out, adding to the heat of the rubble. We can imagine many of the effects already described occurring in hot pockets within the rubble pile. Water percolating down from the Fire Department's hoses might also carry air down with it, and act as an oxidizing agent.

The tight packing of the debris from the building, and the randomization of its materials would produce a haphazard and porous form of ironcrete aggregate: chunks of steel mixed with broken and pulverized concrete, with dust-, moisture-, and fume-filled gaps. Like a pyramid of barbecue briquettes, the high heat capacity and low thermal conductivity of the rubble pile would efficiently retain its heat.

Did small hunks of steel melt in rubble hot spots that had just the right mix of chemicals and heat? Unlikely perhaps, but certainly possible.

Pulverized concrete would include that from the impact zone, which may have had part of its water driven off by the heat. If so, such dust would be a desiccating substance (as is Portland cement prior to use; concrete is mixed sand, cement and water). Part of the chronic breathing disorders experienced by many people exposed to the atmosphere at the World Trade Center during and after 9/11/01 may be due to the inhalation of desiccating dust, now lodged in lung tissue.

Did the lingering hydrocarbon vapors and fumes from burning dissolve in water and create acid pools? Did the calcium-, silicon-, aluminum-, and magnesium-oxides of pulverized concrete form salts in pools of water? Did the sulfate from the gypsum wall panels also acidify standing water? Did acids work on metal surfaces over months, to alter their appearance?

In the immensity of each rubble pile, with its massive quantity of stored heat, many effects were possible in small quantities, given time to incubate. It is even possible that in some little puddle buried deep in the rubble, warmed for months in an oven-like enclosure of concrete rocks, bathed in an atmosphere of methane, carbon monoxide, carbon dioxide, and perhaps a touch of oxygen, that DNA was formed.

Endnotes

[1] MANUEL GARCIA, Jr., “The Physics of 9/11,” Nov. 28, 2006, [search in the Counterpunch archives of November, 2006 for this report and its two companions; one on the mechanics of building collapse, and the other an early and not-too-inaccurate speculative analysis of the fire-induced collapse of WTC 7.]

[2] “Executive Summary, Reconstruction of the Fires in the World Trade Center Towers,” NIST NCSTAR 1-5, (28 September 2006). NIST = National Institute of Standards and Technology, NCSTAR = National Construction Safety Team Act Report. https://www.nist.gov/topics/disaster-failure-studies/world-trade-center-disaster-study

[3] “Fire Structure Interface and Thermal Response of the World Trade Center Towers,” NIST NCSTAR1-5G, (draft supporting technical report G), http://wtc.nist.gov/pubs/NISTNCSTAR1-5GDraft.pdf, (28 September 2006), Chapter 3, page 32 (page 74 of 334 of the electronic PDF file).

[4] 1 m = 3.28 ft;    1 m^2 = 10.8 ft^2;    1 m^3 = 35.3 ft^3;    1 ft = 0.305 m;    1 ft^2 = 0.093 m^2;    1 ft^3 = 0.028 m^3.

[5] “National Institute of Standards and Technology (NIST) Federal Building and Fire Safety Investigation of the World Trade Center Disaster, Answers to Frequently Asked Questions,” (11 September 2006). https://www.nist.gov/topics/disaster-failure-studies/world-trade-center-disaster-study

<><><><><><><>

This article originally appeared as:

The Thermodynamics of 9/11
28 November 2006
https://www.counterpunch.org/2006/11/28/the-thermodynamics-of-9-11/

<><><><><><><>

Beam Me Up! (With Fossil Fuels?)

*******************************************

This article originally appeared as:

The Fossil Fuel Paradigm
25 October 2013
https://www.counterpunch.org/2013/10/25/the-fossil-fuel-paradigm/

*******************************************

“Beam me up, Scotty.” That phrase is as well known to science fiction aficionados as “Gort, Klaatu barada nikto.”

James Tiberius Kirk, the lead character and commanding officer in the futuristic space fantasy television series Star Trek (1966-1969) would call through his wireless communicator for his chief engineer Montgomery Scott to initiate the process of “energizing” him, to be instantly converted into pure energy, and propagated — “transported” — from a planetary surface or another spaceship back to Kirk’s own spaceship the Enterprise where he would be returned to his bodily form.

The popularity of the Star Trek series and its many sequels, spin-offs, imitations and entertaining derivatives all show how entrancing people find the idea of being able to pursue their private dramas with unlimited energy and unflagging power at their disposal, literally at the push of a button. And, one of the most attractive fantasies about having such power would be the ability to hop in a flash across great distances at a moment’s notice: the transporter.

Today as our fossil fuel diggers frack their way under the skin of Planet Earth with their noses pressed tight against the grindstone of profitability, and we burn up oil squeezed out of tar sands and coal hollowed out of mountains to keep up the high-powered freneticism of modern times, dismissing concerns about increasingly turbid choking cancerous air (as in Harbin, China) and global warming with its negative effects on the polar regions, on oceans and marine life, and on weather and climate, the longed-for science fiction fantasy of unlimited kilowatts and unlimited horsepower without undue environmental consequences can seem so cruelly distant. Why can’t we have that now? When will we get it?

In our (humanity’s) attachment to the fossil fuel paradigm, too many of us find it so much easier to imagine how we would employ unlimited push-button power for our expanding and instantaneous personal wants, instead of imagining how to fashion lives of timeless fulfillment liberated from fabricated desires, and expressed with elegant and graceful efficiency.

Given all that, I thought it would be interesting to consider the physics problem of building a “beam me up” transporter. To start this speculative analysis, let us consider the energy and power needed to convert a 70 kilogram (154 pound) person into pure energy for electromagnetic transport.

First, a few words about notation:

The symbol x means multiply.

The symbol ^ means exponent (for example, 10^18).

The unit of mass is a kilogram, with symbol kg. 1 kg = 2.20462 pounds.

The unit of energy is a joule, with symbol J.

1 Exajoule = 10^18 joules = 1 EJ.

The unit of power is a watt, with symbol W.

1 joule/second = 1 J/s = 1 watt = 1 W.

1 Kilowatt = 1 kW = 10^3 W.

1 Terawatt = 1 TW = 10^12 W.

1 Exawatt = 1 EW = 10^18 W.

3,600,000 J = 1 kilowatt x 1 hour = 1 kWh.

Albert Einstein famously showed that mass (m) and energy (E) are two aspects of a single entity, mass-energy, and that the pure energy equivalent of a given mass is E = m x c^2, where c is the speed of light (c = 3 x 10^8 meters/second, in vacuum).

The physical universe is 13.8 billion years old (since the Big Bang) and presently has an extent (distance to the event horizon) of 1.3×10^23 kilometers. The total mass-energy in the universe can be stated as a mass equivalent of 4.4×10^52 kg, or an energy equivalent of 4×10^69 joules.

A 70 kg mass, whether a living person or just inert stuff, has a pure energy equivalent, by Einstein’s formula, of 6.3×10^18 joules (6.3 EJ). So, our desired transporter must supply at least 6.3 EJ to beam a 70 kg mass.

For comparison, the total US energy use in 2008 was 95.7 EJ, and the total world energy use in 2008 was 474 EJ. The combined pure energy equivalent of 15.2 people of 70 kg each equals the total US energy use in 2008. Similarly, the combined mass-energy of 75.4 such people is equivalent to the world energy consumption that year.

Given that there are 3.15569×10^7 seconds in one year, we can calculate the average rate of energy use during 2008 (the power generated) in the U.S.A. as 3 TW, and in the world as 15 TW.

At the US power rate, it would take 24 days to convert one 70 kg individual or object into pure energy for transport if the entire national power output were devoted to this task. If the entire world were yoked to this purpose, it would take 4.9 days.

Aside from considerations of monopolizing national and world power consumption, the idea of “disassembling” a living person and converting them to pure energy over the course of days to weeks seems unappealingly long. How do we ensure we don’t lose the life whose bodily form is being disassembled and dematerialized so slowly? The whole point of a transporter is to achieve near-instantaneous relocation.

For the sake of simplicity we will continue a little bit further with the convenient assumption that a 70 kg transport, whether of a human being or a lump of lead, only requires 6.3 EJ. This implies 100% efficiency of mass conversion to energy, and that no extra energy is required to collect the information needed to materially reconstruct the individual or object on arrival, rather than just deliver a 70 kg puddle of gunk.

If this transporter were to accomplish the 70 kg conversion process in 24 hours exactly (86400 seconds), it would have a power rating of 6.3 EJ/day or 72.8 TW. This is a much higher power consumption than the US national average (3 TW). To operate such a transporter would require an energy storage system with a capacity of at least 6.3 EJ to feed the transporter (discharging over a 24 hour period), and which storage system would be charged up over a longer period prior to transport.

Obviously, if we could build transporters of increased power, the conversion would occur in less time. Thus, a transporter that could convert the 70 kg traveler to pure energy within one hour would operate at 1,747 TW (and draw power from the storage bank at that rate). A 1 minute transport conversion would require 104,846 TW. A 5 second transport converter would require 1,258,157 TW (1.26 EW). For any of these machines, it would take 24 days of total US power generation to store up the energy required for one transport, or almost 5 days of total world power generation.
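
All of these figures follow from E = m x c^2 and the definition of power; here is a minimal Python sketch (the 2008 power levels are the ones quoted above):

    # Energy to convert 70 kg to pure energy, and the power needed to do it quickly.
    C = 3.0e8                      # speed of light, m/s
    E = 70 * C**2                  # ~6.3e18 J = 6.3 EJ

    US_POWER_2008    = 3.0e12      # watts (3 TW)
    WORLD_POWER_2008 = 15.0e12     # watts (15 TW)

    print(E / US_POWER_2008 / 86400)     # ~24 days at the US rate
    print(E / WORLD_POWER_2008 / 86400)  # ~4.9 days at the world rate

    for seconds in (86400, 3600, 60, 5):    # 1 day, 1 hour, 1 minute, 5 seconds
        print(seconds, E / seconds / 1e12)  # required converter power, in TW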

The power generated on Planet Earth, in reality not science fiction, is just not enough for a transporter. Why not use the power of the Sun?

The Sun’s luminosity is 384.6×10^6 EW. If totally harnessed, it would take the Sun 16.4 nanoseconds to supply the 6.3 EJ needed for our 70 kg transport converter. A 5 second (1.26 EW) transport converter could be powered from only 3.3 billionths of the Sun’s luminosity.

The solar mean distance to Earth is 1.496×10^8 km, which is used as a convenient unit of distance in descriptions of the Solar System, and known as 1 AU (one astronomical unit).

A disc 34,224 km in diameter at 1 AU would capture the 3.3 billionths of the Sun’s luminosity needed for our 5 second transport converter. That solar collection disc (assumed 100% efficient) would be 2.7 times larger in diameter than the Earth. Since we wouldn’t want to give up our sunshine by using Planet Earth as a solar collector (for the transporter), nor risk shadowing Planet Earth with an oversized collection disc in nearby outer space, it would seem best to place the entire collector and transporter system at a distance comparable to the Moon’s. Travelers and cargo from Planet Earth scheduled for deep space transport would first have to shuttle to their embarkation point on the Moon by relatively sedate rocket technology.
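
The disc size follows from spreading the Sun’s output over a sphere of radius 1 AU; a minimal Python sketch:

    # Diameter of a collector disc at 1 AU that feeds a 1.26 EW (5-second) converter.
    import math

    L_SUN  = 3.846e26   # solar luminosity, watts
    AU     = 1.496e11   # meters
    P_NEED = 1.26e18    # watts

    flux_at_1AU = L_SUN / (4 * math.pi * AU**2)           # ~1370 W/m^2
    disc_area   = P_NEED / flux_at_1AU                     # m^2
    diameter_km = 2 * math.sqrt(disc_area / math.pi) / 1000

    print(diameter_km)  # ~34,000 km, about 2.7 Earth diameters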

Let us return to the question of the extra energy required to collect the information needed to materially reconstruct an individual or object on arrival after beaming. The immense amount of information about the molecular, atomic and sub-atomic bonds and their many dynamic structural arrangements that in total make up the biophysical self of a particular individual will necessarily require a huge investment of energy to ascertain and code electronically.

One can see that such vital information about the actual relationships between the particle and cellular forms of matter that make up a specific living organism has an equivalent mass-energy: the sum of the energy required to encode the information and then convert that code into transmissible electromagnetic waves. Because a human being is much more complex than the sum of his or her elemental and chemical composition, it is possible that the information mass-energy of a human being will outweigh their bulk mass-energy. Hence, a transport of a 70 kg person that accounts only for the 70 kg of bulk mass will undoubtedly deliver a dead blob of stuff unlikely even to duplicate the original chemical composition. To deliver the same living person, who happens to possess a particular physicality of 70 kg bulk mass, will require much more energy, a vast overhead to account for the great subtlety of living biochemical reality and consciousness. So, perhaps our 70 kg transporter will be able to deliver 70 kg of water, or a 70 kg salt crystal or slab of iron, but only safely transport a much simpler living organism like a small plant or an insect.

Actually, it is only the fully detailed structural code of the individual that would be essential for dematerialized transport. We imagine that such a code would have to be determined by disassembling the materiality of the individual (or object), by “energizing” them. It is then only necessary to transmit the code, not the now destroyed physical materiality converted into pure energy. Otherwise, if such unique structural codes could be determined nondestructively, then the transporter system would advance into being a duplicating system, a 3D cloning printer.

On arrival, the electromagnetic message that is the coded person or object being transported can be rematerialized from energy stored at the destination. Otherwise, the electromagnetic forms of both the structural code and the bulk materiality of the person or object would have to be transmitted, and the materialization at the destination would involve reading the code to use it as a guide in reconverting the beamed-in energy back into the original structured bulk mass.

Other problems for transporter system designers, which we will not explore here, include conversion efficiencies, distortion and loss of signal during propagation, and transport through solid material.

It seems that we will be earthbound without transporters for quite some time.

Oh, that this too, too sullied flesh would melt,

Thaw, and resolve itself into a dew,

Or that the Everlasting had not fixed

His canon ‘gainst self-slaughter! O God, God!

How weary, stale, flat, and unprofitable

Seem to me all the uses of this world!

Fie on ’t, ah fie! ‘Tis an unweeded garden

That grows to seed. Things rank and gross in nature

Possess it merely. That it should come to this.

Today’s reality may seem so primitive, constricted and decayed in comparison to the fantasy worlds of Star Trek, unbounded by physical science, but perhaps the liberation of the spirit so many imagine through science fiction can be experienced here by having the right attitude rather than just wanting unlimited power.

<><><><><><><>

Climate and Carbon, Consensus and Contention

*******************************************************************

This article originally appeared as:

Climate and Carbon, Consensus and Contention
4 June 2007
https://dissidentvoice.org/2007/06/climate-and-carbon-consensus-and-contention/

*******************************************************************

1. Introduction

Is the world heating up because of a build-up of carbon dioxide (CO2) in the atmosphere? If so, does human activity — like burning fossil fuels — produce enough CO2 to be a decisive factor, or is the process largely natural? Would such global warming be a good thing for humanity and life on Earth, or a danger? Can science give us an accurate measure of the amount of heating per unit of CO2 emission? Does such a process continue monotonically and indefinitely, or does it change character by accelerating wildly — a nonlinear or chaotic behavior — beyond a certain concentration of CO2 in the atmosphere? Can nonlinear and chaotic behavior lead to a completely new climate, like an Ice Age? How quickly can such changes take place? How soon will we know all the answers? How much control will we have over our destinies? How will the world politics of global warming play out, and how can I be a winner in that game?

This article will describe some of the technical considerations that go into making a climate model, and in this way give some context to the many claims and counterclaims made about global warming. As with any phenomenon that has the potential of changing the status quo of human socio-political and financial arrangements, there are many self-interest factions who each have a stake in the molding of public opinion on the topic. Unraveling the truth from the propaganda begins by listing the fundamental scientific considerations needed in order to understand the linked and complex phenomena we call climate.

1. Introduction
2. A historical analogy with the birth of modern physics
3. How greenhouse gases hold heat
4. Water vapor and anthropogenic greenhouse gases
5. A note about ozone
6. How climate models work
-> 6.1 Models and links
-> 6.2 Space and time, scales and resolution
7. Solar Heat into the Geartrain of Climate
8. Justifying the IPCC consensus
9. Criticizing the IPCC consensus
10. The Open Cycle Closes
Endnotes

2. A Historical Analogy with the Birth of Modern Physics

Climate research in 2007 may be at a similar point of development as physics research was in 1907, poised for revolution.

Albert Einstein (1879-1955) found that the mechanics of Isaac Newton (1642-1727) was only a low-speed, low-mass limit of “general relativity,” a reality where space, time and gravity are linked, as are mass and energy.

During these same years, Max Planck (1858-1938) introduced his “quantum theory,” which was soon expanded by Einstein and Niels Bohr (1885-1962). Quantum theory revolutionized the 19th century view of electromagnetics, so elegantly stated by Michael Faraday (1791-1867), James Clerk Maxwell (1831-1879), and other scientists of their time and before (e.g., Coulomb, Ampère, Biot, Savart, Hertz). The “old” electromagnetics assumed that a “luminiferous aether” existed in otherwise empty space, and that it was the oscillations of this massless “material” that manifested as electromagnetic waves and, as a result, all known electrical effects. This idea was a logical extension of the observation that mechanical waves in solids (e.g., elastic waves, earthquakes) and fluids (e.g., water waves, sound waves) were the motion of vibrations through matter.

The great difficulty for 19th century experimental physicists was that they could never devise any experiment to actually detect the luminiferous aether, despite the obvious reality of electrical effects and the many motors, generators, radios and other devices built by Nikola Tesla (1856-1943), Thomas Edison (1847-1931) and other electrical engineers. An experiment to detect the aether (in 1887), by Albert Michelson (1852-1931) and Edward Morley (1838-1923), was famous for establishing that the speed of light in a vacuum is a constant (299,792,458 meters per second, a standard value adopted in 1983) regardless of any motion by the measuring device itself (Einstein’s interpretation). Another paradox was that light could exhibit a wave-like nature, as when it refracted (bent) on passing through a glass-air or water-air boundary, dispersed (separated by color) on passing through a prism, or diffracted on passing through a narrow slit; and light could also exhibit a particle-like nature in its very precise and selective initiation of luminescent or electron (charged particle) emission from atoms.

Einstein and the quantum theorists resolved the paradoxes of electromagnetism with the quantum theory. It stated that the luminiferous aether did not exist (thus agreeing with all experiments) and that the seeming contradiction of light (and all electromagnetic radiation) having both a wave and particle nature simultaneously was in fact true. The “wavelength” of a particle or “quantum” of light was exactly proportional to its energy content as given by Planck’s formula, E = h×c/wavelength, where h is Planck’s constant, and c is the speed of light in a vacuum. Despite the seeming oddness of ascribing a wavelength to a single particle (quantum), this model of electromagnetic radiation has proved to be consistent with all measurements. Light has both a wave and particle nature, a fact exploited in electrical, communications, optical and photo-electronic technology.
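
As a small worked example of Planck’s formula, here is a minimal Python sketch (the wavelengths chosen are mine, for illustration):

    # Energy of one photon, E = h*c/wavelength.
    H = 6.626e-34   # Planck's constant, J*s
    C = 2.998e8     # speed of light, m/s

    def photon_energy_J(wavelength_m):
        return H * C / wavelength_m

    print(photon_energy_J(500e-9))  # green visible light, ~4.0e-19 J
    print(photon_energy_J(10e-6))   # thermal infrared, ~2.0e-20 J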

Now, consider the analogy to climate research today. A consensus has developed, and is voiced by the United Nations Intergovernmental Panel on Climate Change (UN IPCC), that the accumulation of CO2 in the Earth’s atmosphere does cause an accumulation of heat in the atmosphere and biosphere of the Earth. Furthermore, human activity, primarily the burning of fossil hydrocarbon fuels, is a significant cause of this CO2 accumulation. This case has not yet been definitively proved, but the majority of scientists and their professional organizations have reached the conclusion that this case passes the test of being true beyond a reasonable doubt. They see an improving agreement between the many complicated and highly regarded (for theoretical rigor and predictive abilities) numerical (computational) models of climate, and the growing body of paleo-, historical, and current climate data.

The vastness of this entangled problem makes it impossible to know and calculate every conceivable detail “exactly,” so there are many scientist critics of the IPCC consensus. The critics include exceptional scientists, of learning and capability equal to those of the consensus scientists. However, they appear to be in the minority of scientific opinion on the issue of CO2 and climate change.

We can ask, are the climate change critics of today like the relativity and quantum theory revolutionists of 1900, their ideas not yet expressed compellingly enough to overturn a highly developed consensus view like luminiferous aether, which was orthodoxy taught in the universities by the teachers of Einstein and his generation? If so, then the “real story” has yet to emerge and revolutionize thinking on climate change.

The other possibility is that the revolution in understanding climate change has already begun, being the IPCC consensus, which will be borne out as more data is gathered, bigger computers are used and models of superior refinement are devised. Are the critics resistant to adopting a still fairly nebulous new idea, and to abandoning the certainties of their long-standing views — like luminiferous aether a century ago — and the technical doubts they have about the new models, doubts which some can articulate with great logic and precision?

Science will march along and in time we will know the answers. However, our social and political problem is that if the IPCC consensus is correct (and, worse yet, if it is conservative) then we have little time to do anything about the predicted negative consequences of CO2 accumulation in the atmosphere.

3. How Greenhouse Gases Hold Heat

The significant greenhouse gases are water vapor (H2O, 36-70%), carbon dioxide (CO2, 9-26%), methane (CH4, 4-9%), ozone (O3, 3-7%), nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, perfluorocarbons and chlorofluorocarbons. The chemical symbol and the percentage contribution to the greenhouse effect on Earth by that species appear in parentheses for the first four gases.1

Sunlight that penetrates the atmosphere and is absorbed by the lands and oceans of the Earth warms its surface. In turn, the Earth’s surface radiates heat in the form of infrared radiation up into the atmosphere. Greenhouse gases absorb and retain this heat, and this effect is due to their molecular nature.

Many types of molecules will develop a slight electrical charge imbalance when their heavy nuclei rotate and vibrate relative to each other as seen along the directions of their chemical bonds. These charged oscillations can have frequencies and energies that match those of a quantum of infrared radiation. So, such molecules readily absorb incident infrared photons (“particles” of infrared electromagnetic energy), and they apply the added energy to boost themselves into a higher state of rotational and vibrational excitation. Basically, molecules store heat “internally” by fidgeting (like little children who would rather be running around than sitting at a dinner table or in a church pew). Gases made up of isolated atoms, like helium, neon and argon, cannot store heat internally (by rotation and vibration about a chemical bond); their response to being heated is to move more quickly, and this is called kinetic energy, an “external” form of energy, which adds to the aggregate effect of an increase in pressure and temperature in a volume of gas.

Nitrogen (N2) and oxygen (O2), the major gas species in Earth’s atmosphere, do not develop a significant charge imbalance when they rotate and vibrate, because of the symmetry of their chemical structure (one end of the “dumbbell” never looks more nor less positive than the other). Molecules of this type neither absorb nor emit (very much) infrared radiation. Molecules with more chemical bonds, and with nuclei from several chemical elements, will have more heat storage capacity, a good example being the CFCs, chlorofluorocarbons, highly volatile fluids devised as refrigerants.

Molecules with stored heat (internal energy) can transmit this energy to other molecules and atoms by colliding with them. Such “inelastic collisions” can de-excite the rotation and vibration of molecules while boosting the speed of other molecules and atoms. In this way the internal energy of greenhouse gas molecules can contribute to the kinetic energy of atmospheric particles: the sensible heat of the atmosphere.

It is interesting to note that the air about you has 2.7×10^25 particles/meter^3, spaced by an average distance of 3.3×10^-9 meters; and that each air molecule collides 10^10 times/second, with an average travel between collisions of 6×10^-8 meters. These numbers characterize sea-level air.
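
These sea-level figures hang together; here is a minimal Python sketch of two of them (the 480 m/s thermal speed is an assumed typical value, not from the text):

    # Mean molecular spacing from the number density, and a rough collision rate.
    n = 2.7e25                    # molecules per cubic meter (quoted above)
    spacing = (1.0 / n) ** (1.0 / 3.0)
    print(spacing)                # ~3.3e-9 m, as stated

    v_mean = 480.0                # m/s, assumed typical thermal speed of air molecules
    print(v_mean / 6e-8)          # ~8e9 collisions per second, consistent with ~10^10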

4. Water Vapor and Anthropogenic Greenhouse Gases

Nature supplies all the water vapor in the atmosphere, and much of the carbon dioxide, methane and ozone. Human activity supplies all of the very high heat capacity volatile organic compounds (VOCs). Obviously, a VOC gas whose molecules can each hold ten to one hundred times the internal energy of a CO2 molecule will be as effective as ten to one hundred times the VOC quantity of CO2. Even with this leverage, the quantities of H2O, CO2, CH4 and O3 in the atmosphere are large enough to dominate the effect of heat retention (this does not justify emitting more VOCs). So, the emission of CO2 by human activity is our most effective contribution to atmospheric heat retention.

As CO2 accumulates, the atmosphere warms, more water is evaporated, which adds heat retention capability to the atmosphere and increases warming, a positive feedback loop. A mitigating effect is the formation of clouds from the water vapor, which has a cooling effect by reflecting sunlight. Heat retention capability is called “heat capacity” in the study of thermodynamics. The effect of CO2 emission is not merely to add its own heat capacity to the atmosphere, but to act as an agent causing a further increase in the dominant component of atmospheric heat capacity, water vapor. Humans have no control over the water cycle, but they can have some control over the emission of CO2.

Today, there are nearly 380 ppm (parts per million) of CO2 in the atmosphere, whereas prior to 1800 (for about 10,000 years) there was usually about 280 ppm. The total emission of carbon from burning is 6.5 GT/y (giga-tons/year, for giga = 10^9, tons = metric tons of 1000 kg); of this total, 4 GT/y enters the atmosphere. Individual molecules of CO2 remain in the atmosphere for several years before being taken up by biological systems or absorbed by the oceans. However, because of the many sources and sinks of CO2 (e.g., outgassing from warming seas, like a ginger ale going flat on a hot summer day) the average concentration of atmospheric CO2 will take between 200 years and 450 years to equilibrate (level out) in response to any small perturbation (increase or decrease) of its concentration. So, if all burning by human activity (anthropogenic sources) were to stop today, it might take hundreds of years for the CO2 concentration to reach an equilibrium; it would probably rise for a time, peak, then equilibrate to a steady level below the peak concentration.
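
A rough cross-check of these numbers, in a minimal Python sketch; the atmospheric mass and molar masses used here are standard values, not taken from this article:

    # How much does 4 GT/y of carbon raise the CO2 concentration?
    M_ATMOSPHERE = 5.15e18   # kg, total mass of the atmosphere (standard value)
    MOLAR_AIR    = 0.02897   # kg/mol, mean molar mass of air
    MOLAR_C      = 0.012     # kg/mol, carbon

    mol_air = M_ATMOSPHERE / MOLAR_AIR
    GtC_per_ppm = (1e-6 * mol_air * MOLAR_C) / 1e12   # kg -> GT; ~2.1 GT carbon per ppm CO2

    print(GtC_per_ppm)        # ~2.1
    print(4.0 / GtC_per_ppm)  # ~1.9 ppm/year added by the 4 GT/y entering the atmosphere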

5. A Note about Ozone

Ozone (O3) absorbs ultraviolet light, which is dangerous to human skin and many living things. In filtering this higher-energy component of sunlight, upper atmospheric ozone performs a valuable service for us. CFCs destroy ozone by oxidation: they strip an oxygen atom off the O3 molecule, leaving O2. CFCs are regulated by the Montreal Protocol, to address the problem of the degradation of the upper atmospheric UV shield.

Lower atmospheric (tropospheric) ozone is produced by chemical reactions that involve auto exhaust and pollution gases. Ozone is corrosive: it damages lungs, embrittles plastics, fades painted surfaces (e.g., automobiles; poetic justice?), and corrodes the stone faces of many ancient monuments. Tropospheric ozone is the species considered a greenhouse gas.

6. How Climate Models Work

6.1 Models and Links

“A climate model is a computer based version of the Earth system, which represents physical laws and chemical interactions in the best possible way. We include the sub-systems of the Earth system, which is gained from investigations in the laboratory and measurements in reality. A global model is composed of data derived from the results of models simulating parts of the Earth system (like the carbon cycle or models of atmospheric chemistry) or, if possible with the available computer capacity, the models are directly coupled. The functionality of the models is tested by comparing simulations of the past climate with measured data we already have.”2

The energy of the Sun drives the Earth’s weather and climate. We will follow this energy as it falls through the atmosphere, warming the land and the oceans, to turn over the many interlocking cycles that produce the phenomena of climate. First, consider these major subsystems of climate, and the links between them.

The atmosphere will be represented by two models, one physical (M_Atmos_phys), one chemical (M_Atmos_chem). The physics model of the atmosphere will apply mechanics and thermodynamics to account for the temperature distribution, the generation of wind, the formation of clouds, as well as the vertical variation of properties on account of gravity. The chemical model of the atmosphere will produce the concentration of species, which results from the many chemical reactions possible at any elevation, given the local temperature and density of the atmosphere.

The oceans are represented by a model (M_Ocean) that links salinity and temperature to local current, and this current conveys heat (e.g., the Gulf Stream).

The biosphere may be modeled (M_Bio) as a series of sources and sinks of gases (O2, CO2), fluids (H2O), other substances (waste production, deforestation) and heat, which interacts with the oceans (M_Ocean) and atmosphere (M_Atmos_phys and M_Atmos_chem).

The carbon cycle can be singled out as a separate model (M_CO2) acting in parallel to the biosphere model.

Links between the ocean model and the atmospheric physics model would include the force of wind on the ocean, the cycle of evaporation and precipitation, and the cycles of (infrared) radiation and heat flow (by convection) between air and water.

It is understood that the physics models of the air and oceans include the effects of the Earth’s rotation. A schematic of the global model might be as follows (M = model, L = link, directions of influence can be > [right], < [left] or <> [2-way], see footnote 2 for a picture),

[M_Atmos_chem]<<[M_Bio]>>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]<L_heat>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]>L_wind>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]<L_rain>[M_Ocean].
[M_Atmos_chem]<>[M_CO2]<>[M_Ocean].
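
One informal way to picture this coupling is as a graph of component models and links; a minimal Python sketch (the names follow the schematic above, and the structure is only illustrative, not any real climate code):

    # The global model as a graph: nodes are component models, edges are links.
    links = [
        ("M_Bio",        "M_Atmos_chem", "gas and heat exchange"),
        ("M_Bio",        "M_Ocean",      "gas and heat exchange"),
        ("M_Atmos_chem", "M_Atmos_phys", "species <-> temperature, density"),
        ("M_Atmos_phys", "M_Ocean",      "L_heat (two-way)"),
        ("M_Atmos_phys", "M_Ocean",      "L_wind (atmosphere -> ocean)"),
        ("M_Atmos_phys", "M_Ocean",      "L_rain / evaporation"),
        ("M_Atmos_chem", "M_CO2",        "carbon cycle"),
        ("M_CO2",        "M_Ocean",      "carbon cycle"),
    ]

    for a, b, what in links:
        print(f"{a} <-> {b}: {what}")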

One can imagine many refinements to this basic climate model. The first is obviously to include a land surface model, and link it to the atmosphere and oceans. The land surface model could be further elaborated by including dynamic aspects of vegetation (perhaps there would be overlap with the biosphere model). Another refinement is to account for the many particulates (e.g., dust, salt, droplets) in air, an aerosol model. Aerosols can scatter and absorb light (producing the “blue” of the sky), capture gas molecules on their surfaces and act as catalysts to certain chemical reactions, and they have a major impact on the formation of clouds. The injection of sulfate aerosols into the atmosphere by large volcanic eruptions has cooled the planet and affected weather globally for a time (e.g., for 5 years after the Krakatoa eruption of 1883). Given that aerosols rain out into the oceans, one could add an ocean chemistry model (especially if considering ocean sequestration of CO2 as an active scheme; this would acidify the oceans and kill a variety of marine life). Another refinement would be to include a sea-ice model (heat flow at the ocean-air interface, light reflection) with links to the ocean and atmosphere models.

6.2 Space and Time, Scales and Resolution

The limitation to model complexity is not human imagination, nor any limit placed by the inventory of known facts about natural processes; it is the finite capacity of computing machines. Computer models of the oceans and the atmosphere will be calculations performed on a three dimensional wire-mesh representation (grid) of the space taken by the air and water. Such grids may include an enormous quantity of points and yet have very coarse resolution. Typical atmosphere models have a 250 km horizontal resolution and 1 km vertical resolution; they may have 20 horizontal (spherical shell) layers in the first 30 km of elevation (90 percent of the atmosphere is below 16 km, 99.99997 percent is below 100 km). Ocean models can have 125 km to 250 km horizontal resolution and 200 m to 400 m depth resolution (ocean depth can be as much as 10,000 meters).
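
To get a feel for the problem size, here is a minimal Python sketch of the cell counts such resolutions imply (the Earth and ocean surface areas are standard values; the ocean layer count is my own rough assumption):

    # Rough grid-cell counts for the resolutions quoted above.
    EARTH_AREA_KM2 = 5.1e8   # total surface area of the Earth
    OCEAN_AREA_KM2 = 3.6e8   # ~71% of the surface

    atmos_cells = (EARTH_AREA_KM2 / 250**2) * 20   # 250 km horizontal, 20 layers
    ocean_cells = (OCEAN_AREA_KM2 / 125**2) * 25   # 125 km horizontal, 25 depth layers (assumed)

    print(int(atmos_cells), int(ocean_cells))      # ~160,000 and ~580,000 cells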

“Small scale physical processes which are below the size of the grid cells cannot be explicitly resolved. Their net impact on the coarse scale processes is estimated and included into the model by parameterization. In the atmosphere this is in particular the case for cloud formation, in the ocean for small scale eddies and for convection processes.”2

Climate models are supposed to predict general conditions many years in the future (and reproduce the record of the past). So, they calculate across “big” cells of space and “long” steps of time. They “average over” small spatial effects and those of short duration, what we would experience as local weather and day-night cycles. It is easy to see that the daily oscillations of temperature during a “hot” July we recall from our past do not diminish our memories of having lived through a continuing “hot spell.” Climate models aim to predict these seasonal, even monthly averages, rather than reproduce (or predict) the filigrees of day-to-day weather variations about the mean conditions.

But, don’t small scale and short time effects have some impact on the bigger picture of climate? For example, don’t the formation and dispersal of clouds, though brief and localized phenomena, affect climate, since clouds can effectively block sunlight, so that over many stormy seasons and places they significantly reduce the solar heating of the planet? Yes, which is why such effects are estimated, and these estimates are included in climate models as “parameters,” or, as affectionately known to all scientists, “fudge factors.” A fudge factor might be a table or formula derived from data or other work, which pairs a given property, say percentage cloud cover, to a quantity of the model, say relative humidity (percentage of water vapor in the air). A fudge factor might be elaborate (e.g., a separate computer subroutine, evaluated at every space and time step) or very elementary (e.g., a single and constant value for the needed factor, arbitrarily specified by the programmer for each run of the program).
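
As a concrete (and entirely made-up) example of such a parameterization, here is a sketch in Python of a cloud-cover “fudge factor” diagnosed from a grid cell’s relative humidity; the functional form and the threshold value are illustrative assumptions, not any particular model’s scheme.

RH_CRIT = 0.75   # assumed relative-humidity threshold below which no cloud is diagnosed

def cloud_fraction(relative_humidity):
    """Return an illustrative cloud-cover fraction (0 to 1) for one grid cell."""
    rh = min(max(relative_humidity, 0.0), 1.0)
    if rh <= RH_CRIT:
        return 0.0
    return ((rh - RH_CRIT) / (1.0 - RH_CRIT)) ** 2

for rh in (0.5, 0.8, 0.9, 1.0):
    print(f"RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")

A model would evaluate something like this in every grid cell at every time step; replacing such recipes with explicit physics of evaporation, condensation and cloud formation is the kind of improvement discussed next.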

The task of any climate model scientist is to improve the spatial and temporal accuracy of the model (finer grids, bigger computers), and to eliminate as many parameters (fudge factors) as possible by replacing them with self-consistent physics and chemistry models (mathematical abstractions of the actual processes). Like any crutch, fudge factors are only a problem when we remain wedded to them instead of trying to build up our strength (knowledge) so as to eliminate them from our activity. The immensity of the problem at hand, and the reality of any person’s finite resources, mean that some of these fudge factors will remain in use for quite some time. Recall that fudge factors show a recognition of considerations that one does not wish to ignore even though they may be difficult to handle. I imagine that these tasks make up most of the day-to-day, nitty-gritty work of climate modeling research.

7. Solar Heat into the Geartrain of Climate

The Sun, our star, has its own cycles of behavior (e.g., sunspots with an irregular cycle of about 11 years), which have been carefully studied and are now monitored by satellites. The quantity and spectrum of solar radiation arriving at the Earth at any given time (insolation) is known. Variations of solar radiation are relatively small, and for most purposes the output of the Sun can be taken as constant. The “solar constant” (about 1360 watts per square meter) is defined as the solar energy falling per unit time at normal incidence on a unit area at the Earth’s distance from the Sun (ignoring the atmosphere). At any moment, Earth is intercepting 1.7×10^17 watts, or 170 million gigawatts of solar power.
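
The intercepted power follows from the solar constant and the Earth’s cross-sectional disk; a short check of the arithmetic (a sketch, using the commonly cited approximate constants):

import math

SOLAR_CONSTANT = 1361.0          # W per square meter, approximate
R_EARTH = 6.371e6                # meters

intercepted_power = SOLAR_CONSTANT * math.pi * R_EARTH**2
print(f"intercepted solar power ~ {intercepted_power:.2e} W")   # ~1.7e17 W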

The motion of the Earth has several cycles whose collective effect influences changes in climate; these are Milankovitch cycles (Milutin Milankovitch, 1879-1958). One is a 100,000 year “ice age” cycle, which coincides with the periods of glaciation during the last few million years, the Quaternary Period. Milankovitch cycles are the net effect of three periodicities, those of eccentricity, axial tilt and precession. The eccentricity of the Earth’s orbit around the Sun is the “ovalness” of that circuit. The axial tilt of the Earth’s rotational axis (~north-south axis) is the angle between the plane of rotation (~the plane of the equator) and the orbital plane (the plane of the Earth’s orbit about the Sun). The precession is the wobble of the Earth’s axis (like the wobble of a spinning top). Milankovitch cycles are a major factor in climate change, but they do not explain everything about past climate (for which there is data).3
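
To illustrate how these three periodicities combine, here is a toy sketch (Python) that sums three idealized sinusoids with the commonly quoted approximate periods; the amplitudes are arbitrary display values, not physical insolation anomalies, so this shows only the character of the superposition, not the real Milankovitch forcing.

import math

PERIODS_KYR = {"eccentricity": 100.0, "obliquity": 41.0, "precession": 23.0}
AMPLITUDES = {"eccentricity": 1.0, "obliquity": 0.6, "precession": 0.4}   # arbitrary

def combined_signal(t_kyr):
    """Toy superposition of the three cycles at time t (thousands of years)."""
    return sum(AMPLITUDES[k] * math.sin(2.0 * math.pi * t_kyr / PERIODS_KYR[k])
               for k in PERIODS_KYR)

for t in range(0, 401, 50):          # sample every 50,000 years
    print(f"t = {t:3d} kyr   combined signal = {combined_signal(t):+.2f}")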

The ultraviolet portion of the solar flux begins interacting with the tenuous and ionized upper fringes of the atmosphere (from 50 km to 1000 km), before most of it is absorbed in the ozone layer (25 km) at the threshold to the bulk of Earth’s atmosphere. The visible light streams through a generally transparent atmosphere, except where it is reflected and scattered by clouds and aerosols. Visible light eventually strikes land or water and is absorbed, or it strikes ice and snow and is largely reflected. Solar energy absorbed into the Earth warms its surface, down to a depth of perhaps 100 meters, to an average (equilibrium) temperature of 15° C (59° F). Of course, at the immediate surface (down to at most 10 meters) the temperature is set by the latitude, season and local weather. Below, say, 1 km, the heat flowing up from the Earth’s hot interior (residual heat of formation and radioactive decay) becomes evident, and temperature increases with depth.

The surface of the Earth (-60° C to 50° C) radiates infrared photons of about 10^-20 joules of energy, with frequencies in the range of 15,000 GHz, and wavelengths in the range of 20 micrometers (microns). As already described, greenhouse gases can absorb these photons and add heat to the atmosphere.
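
Those photon figures follow from Planck’s relation (energy = h times frequency, frequency = c divided by wavelength); a quick check of the arithmetic:

PLANCK_H = 6.626e-34        # Planck constant, J*s
LIGHT_C = 2.998e8           # speed of light, m/s

wavelength = 20e-6                           # 20 micrometers, in meters
frequency = LIGHT_C / wavelength             # ~1.5e13 Hz = 15,000 GHz
energy = PLANCK_H * frequency                # ~1e-20 J

print(f"frequency ~ {frequency:.2e} Hz ({frequency / 1e9:,.0f} GHz)")
print(f"photon energy ~ {energy:.1e} J")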

The absorbed solar energy powers many cycles. In the oceans, the flow of heat involves currents that include changes of salinity and density (and thus of depth). The thermohaline cycle is a complex “conveyor belt” of salt and heat linking all the world’s oceans. In general, ocean currents transport heat absorbed in tropical latitudes up (and, in the Southern Hemisphere, down) to higher latitudes. For example, Ireland, Scotland, Wales and England experience warmer climate than is usual at their latitudes, comparable to those of Hudson Bay, Newfoundland, the Kamchatka Peninsula, the Bering Sea and the Aleutian Islands. Western Europe is warmed by the Gulf Stream, which emanates from the Caribbean Sea. Here, heat and evaporation produce a warm, salty and buoyant surface current that sweeps north along the Eastern Seaboard of the United States, cooling in the North Atlantic, becoming denser, freshening by mixing with glacial melt south of Greenland, and then sinking to the ocean floor to continue in a circuitous path that has it bobbing up in tropical latitudes and sinking in polar ones. One theory about the effects of global warming holds that the melting of Greenland’s ice cap will dump so much fresh water into the North Atlantic that the thermohaline current will become so fresh (free of salt) and buoyant (less dense) that it will no longer sink there, thus stopping the convection of tropical heat to colder latitudes (the actual stopping of the massive momentum of this worldwide current might take decades to a century). Without such warming, the poles would once again ice over, and these ice caps could easily extend to mid latitudes, cooling the Earth into a new Ice Age.

The heat absorbed by the atmosphere, combined with the forces imparted to it by the rotation of the Earth, will produce patterns of circulation and a distribution of temperature that will change in response to the Milankovitch cycles, as well as alterations to atmospheric chemistry introduced by human activity. The 36 percent increase in atmospheric CO2 from 280 ppm to 380 ppm represents the addition of 217 gigatons (1 gigaton = 10^9 metric tons) of carbon over the last two centuries, most of it during the last 50 years. The mass of carbon held in the atmosphere has increased from the pre-industrial amount of 607 gigatons to 824 gigatons today.
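
Those carbon totals can be checked against the standard conversion from CO2 concentration (ppm) to mass of carbon; a sketch of the arithmetic, using round-number constants:

ATMOSPHERE_MASS_KG = 5.15e18    # approximate total mass of the atmosphere
MOLAR_MASS_AIR_G = 28.97        # mean molar mass of dry air
MOLAR_MASS_C_G = 12.01          # molar mass of carbon

moles_of_air = ATMOSPHERE_MASS_KG * 1000.0 / MOLAR_MASS_AIR_G
gtc_per_ppm = (moles_of_air * 1e-6) * MOLAR_MASS_C_G / 1e15    # grams to gigatons

print(f"1 ppm CO2 ~ {gtc_per_ppm:.2f} GtC")                     # ~2.1 GtC per ppm
print(f"280 ppm ~ {280 * gtc_per_ppm:.0f} GtC, 380 ppm ~ {380 * gtc_per_ppm:.0f} GtC")
print(f"100 ppm rise ~ {100 * gtc_per_ppm:.0f} GtC added")

This gives roughly 2.1 gigatons of carbon per ppm of CO2, or about 600 GtC at 280 ppm and 810 GtC at 380 ppm, close to the figures quoted above.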

For completeness, we note that the incidence of any low probability natural catastrophe, like the fall of a massive comet, or a caldera eruption (an extremely large volcanic eruption) could radically alter climate (and might be fun to model).

It is easy to see that there are many, many uncertainties, approximations, and links that any particular subsystem model relies on, and which in turn affect the accuracy and reliability of any global climate model. So, there is more than enough material for critics to point to as serious deficiencies. Where the criticisms are knowledgeable and specific, they will direct the efforts of climate modelers to refine their synthesis. Breakthroughs will come from scientists who put their minds to understanding why certain disagreements between climate models and reality persist. Whether such breakthroughs will put the final polish on the models, or utterly destroy them by giving birth to new conceptions, I cannot say.

8. Justifying the IPCC Consensus

The IPCC Fourth Assessment Report (2007) concluded that “Most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” The report defines “very likely” as a probability greater than 90% that more than 50% of the observed warming is attributable to human activity.4 This statement represents the consensus of the scientific community.5

From a scientific point of view, the IPCC is a nightmare. From a government and corporate (sadly, the same) point of view, the IPCC is a useful bureaucracy that dampens the “alarmist” potentialities of unfiltered scientific findings being broadcast to the public. From the public’s perspective, the net result may be an acceptably reliable source of sobering information that gently understates the possibilities.6

The IPCC was established in 1988 by two U.N. organizations, the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP). The purpose of this panel is to evaluate the human impact on climate. The members of the panel are representatives appointed by governments, and they include scientists as well as others concerned with socio-economic (e.g., development) and policy issues. Besides an upper management and administration layer, the panel operates as three Working Groups (WG) assessing: I, the scientific research on climate; II, the vulnerability of socio-economic and natural systems; and, III, options (policies) for limiting greenhouse gas emission, and otherwise countering the potential hazards.

The “report” from the IPCC is actually in three volumes, one from each working group. The IPCC does not conduct any climate research itself; its scientists evaluate the peer-reviewed scientific literature, and their consensus on the state of the art is then further smoothed into summary reports by the process of “committee authorship.” The WGI volume of the IPCC Assessment Report is the essential scientific (as in math, physics, chemistry) report.

Any particular technical conclusion by WGI might represent a consensus of many individual scientific efforts, perhaps hundreds of published papers by thousands of scientists. For example, the attribution of the global warming above what would be expected from natural causes to anthropogenic CO2 emission relies, in part, on the observation that climate models that include natural causes of warming and anthropogenic sources of greenhouse gases reproduce the data on global temperature rise (within a reasonable error band), while climate models that only have natural causes of warming do not reproduce this temperature history.4

It appears that the variety of choices made about their parameters (fudge factors, like for cloud cover) by the many climate modelers who were sampled was not the decisive factor in determining the average temperature rise. The process of peer-reviewed publication ensured that all the works sampled by the IPCC met good technical standards. So, the IPCC is making technical conclusions based on the overall trend of scientific findings, the “state of the art.”

The IPCC’s emphasis on technical conservatism is paid for by the deliberate (perhaps slow?) pace of publishing its findings. The recent observation of methane outgassing from melting tundras — a potentially huge new source of a potent greenhouse gas — is not included in the latest IPCC report. The measured trends of global warming (e.g., temperatures and sea level changes) are always at the top of the ranges of predictions published by the IPCC.7

The IPCC is led by government scientists, and most of the panelists and authors are also scientists. The “political” people in the IPCC can just as easily be scientists who manage a more than purely scientific group process, which has multiple political sponsors under the UN umbrella. Clearly, scientists who distinguish themselves in the field of climate research can be invited and appointed to the panel. However, they can also be removed when their government’s key corporate sponsors find them too “alarming.” This was the case in the replacement of Robert Watson as IPCC chairman by Rajendra K. Pachauri in 2002. ExxonMobil had beseeched the Bush Administration to lobby the IPCC for this change.6

Any IPCC scientist will have both compelling and restraining motivations. Their original passion for science, the interest and excitement of the work, will drive them to uncover as much of the mechanisms of climate as they can, and to tell others about their findings and the implications for human society. When their results are accepted and adopted by other scientists in their field, their esteem rises, and they become invested in maintaining their technical reputations. These two motivations, one personal, the other social, combine to push scientists into becoming advocates for their fields. However, successful government scientists are supremely political creatures who have mastered the art of extracting money from political structures to fund their activities. They understand the value (to their careers) of packaging the message for sponsor consumption; so the asperity of the raw and knotty truth emerging from science’s workbenches must be slipped into the most svelte form possible that preserves the facts. It is easy to see how these forces of personal psychology will find an equilibrium that matches the institutional character of the IPCC, a measured and deliberate style and a thorough technical conservatism (all scientists except the mad ones and the geniuses are terrified of ever being wrong). Politics slows and dampens the message from the IPCC, but it does not quash it.

9. Criticizing the IPCC Consensus

I am always happy to be in the minority. Concerning the climate models, I know enough of the details to be sure that they are unreliable. They are full of fudge factors that are fitted to the existing climate, so the models more or less agree with observed data. But there is no reason to believe that the same fudge factors would give the right behavior in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.
— Freeman Dyson, 20078

The bad news is that the climate models on which so much effort is expended are unreliable because they still use fudge factors rather than physics to represent important things like evaporation and convection, clouds and rainfall. Besides the prevalence of fudge factors, the latest and biggest climate models have other defects that make them unreliable. With one exception, they do not predict the existence of El Niño. Since El Niño is a major feature of observed climate, any model that fails to predict it is clearly deficient. The bad news does not mean that climate models are worthless. They are, as Manabe said thirty years ago, essential tools for understanding climate. They are not yet adequate tools for predicting climate.
— Freeman Dyson, 19999

That portion of the scientific community that attributes climate warming to CO2 relies on the hypothesis that increasing CO2, which is in fact a minor greenhouse gas, triggers a much larger water vapor response to warm the atmosphere. This mechanism has never been tested scientifically beyond mathematical models that predict extensive warming, and are confounded by the complexity of cloud formation — which has a cooling effect…. We know that [the sun] was responsible for climate change in the past, and so is clearly going to play the lead role in present and future climate change. And interestingly… solar activity has recently begun a downward cycle.
— Ian Clark, 200410

Our team… has discovered that the relatively few cosmic rays that reach sea-level play a big part in the everyday weather. They help to make low-level clouds, which largely regulate the Earth’s surface temperature. During the 20th Century the influx of cosmic rays decreased and the resulting reduction in cloudiness allowed the world to warm up. …most of the warming during the 20th Century can be explained by a reduction in low cloud cover.
— Henrik Svensmark, 199710

I’m not saying the warming doesn’t cause problems, obviously it does. Obviously we should be trying to understand it. I’m saying that the problems are being greatly exaggerated. They take away money and attention from other problems that are much more urgent and important. Poverty, infectious diseases, public education and public health. Not to mention the preservation of living creatures on land and in the ocean.
— Freeman Dyson, 20059

This sampling of criticism of the IPCC consensus captures much of the substance of the opposition. Freeman Dyson, an extraordinary scientist, creative thinker and popular author, accurately focuses on the weakest technical elements in the entire CO2 climate computer calculation construction: fudge factors and coarse resolution (and, elsewhere, on the CO2-water vapor connection). Ian Clark, a hydrogeologist and professor at the University of Ottawa, succinctly states the doubts about the connection between CO2 and water vapor, and voices a belief in the controlling role of solar variability combined with Milankovitch cycles. Henrik Svensmark, an astrophysicist at the Danish National Space Center, describes a specific mechanism claimed to control the formation of low-level clouds and to be moderated by solar variability, hence an entirely alternate theory of global warming (and climate) as a completely natural process. Finally, Dyson voices a sentiment common to the opposition critics, that the failings they point to are so grave or so unlikely to be overcome that the funding for climate modeling work should be drastically reduced.

Dyson’s point on fudge factors is that they stand in for physics that is missing (e.g., a detailed model of evaporation from the sea, condensation in the air, and precipitation; to arrive at a dynamic and spatially resolved reflectivity of the atmosphere: clouds), and they are arbitrarily adjusted to make the calculations agree with present trends. Once a set of “good” fudge factors is arrived at by matching the data, then the code is run far into the future to predict climate. However, this procedure relies on the unjustified assumption that the operation of the physics behind any fudge factor in that hypothetical future world is exactly like the operation of that physics today, even if those future conditions are very different. How do we know that the evaporation-precipitation cycle of that future time will result in exactly the same cloud cover fudge factor as occurs today? If the composition of the atmosphere (gases and aerosols) is very different, this would not be the case. The only reliable course is to actually put in the physics of the processes covered over by fudge factors, and allow them to be calculated in a self-consistent way with the evolving conditions. This criticism is so clear and correct that one can only presume it is being addressed directly by cloud research and advances in climate modeling. Perhaps in a few years this will be solved; and it is even possible that the fudge factors won’t be that different.

Dyson’s other point is that models of greater resolution in space and time, which reproduce localized and transient phenomena like El Niño (a periodic warming in the mid Pacific Ocean, which is big compared to cell size), will boost the credibility of futuristic predictions. One can only assume that whatever features allowed one group to predict El Niño, at the time Dyson made his comments, have been studied, duplicated and elaborated upon by others since. Again, Dyson’s critique points to what should be (and I assume is) a major focus of climate modeling efforts.

Ian Clark asks for experimental verification of the theoretical CO2-water vapor link; the idea of CO2 capturing infrared energy, heating the atmosphere, which allows more water to evaporate and itself contribute to infrared absorption, thus forming an atmospheric heating positive feedback loop. As he notes, calculations of the effect readily support the hypothesis.
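
The logic of such a positive feedback loop can be written in the standard linear feedback form, total warming = direct warming divided by (1 - gain). A sketch follows; the direct warming and the gain values are arbitrary example numbers for illustration, not measured climate sensitivities.

def amplified_warming(direct_warming_c, gain):
    """Equilibrium warming for a linear feedback: dT = dT0 / (1 - f), with 0 <= f < 1."""
    if not 0.0 <= gain < 1.0:
        raise ValueError("gain must be in [0, 1) for a stable feedback")
    return direct_warming_c / (1.0 - gain)

for f in (0.0, 0.3, 0.5):
    print(f"direct warming 1.0 C, feedback gain {f:.1f} -> total {amplified_warming(1.0, f):.1f} C")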

Experimental proof would have to be found in either observations in the natural world, or small scale experiments in a laboratory. Perhaps a comparison of observations of cloud formation and regional air temperature changes over heavily industrialized and urban areas — expected to emit significant CO2 — and remote unpopulated areas might show what effect, if any, excess CO2 has on local humidity and heating, or cloudiness and cooling. I can imagine such measurements being performed from fixed weather stations, ships, airplanes and satellites carrying infrared sensing instruments (heat sensing), radars (aerosol, droplets, cloud probing) and particle sampling filters (aerosols, dust, salt). Again, I imagine cloud physics experimental scientists, following in the footsteps of Vincent J. Schaefer (1906-1993), Bernard Vonnegut (1915-1997) and Duncan C. Blanchard, among others, are actively working to measure the reality of the situation. Another avenue would be to build a laboratory cloud chamber (a chamber with an air space above liquid water, and external controls over volume and pressure), introduce CO2, irradiate it with an infrared laser (e.g., CO2 laser) to selectively heat the CO2, and then measure the heating of the “air” (probably just N2) by inelastic collisions with CO2, and also the change in water vapor concentration. I would be happy to conduct this experiment if given a few million dollars and a plum academic appointment.

Recent findings from the study of ice cores show that at certain times in the past the average temperature began rising hundreds of years before the increases in CO2 concentration. Some critics point to this as proving that solar heating alone controls climate change, and that the rise in CO2 is a result of outgassing from warming seas and thawing tundras. This last effect is certainly true and happening today, but the occasional lag of past CO2 increases behind temperature does not prove that the reverse cannot happen. Both the data and basic physics principles support the conclusion that the presence of CO2 amplifies warming initiated by any factor. At certain times in the past, solar-orbital (solar variability and Milankovitch cycle) effects initiated a warming phase, which caused CO2 to bubble out of warming seas and thawing tundras — a lagging effect — that amplified the warming, the further evaporation of water, and so on. Today, the artificial injection of CO2 into the atmosphere has added to its capacity to trap outgoing infrared heat and boosted whatever warming might have been occurring from strictly natural causes — a leading effect.4

A criticism often hurled back at critics is “well, what’s your explanation?” If the IPCC consensus is wrong about climate change, then what causes it? Henrik Svensmark provides one answer. His claim is that cosmic rays dominate the formation of tropospheric clouds, and the variability of the cosmic ray flux directly influences the variability of the Earth’s cloud cover, and as a result its solar heating, and ultimately its climate fluctuations.

Cosmic rays are very high energy photons and charged particles produced by some combination of nuclear reactions and powerful electromagnetic accelerating effects in deep outer space. The high energy of these rays makes them extremely penetrating; some of their collision products (muons) pass through hundreds of meters of rock. When cosmic rays collide with atomic and molecular matter, the breakup scatters numerous particles (e.g., atomic ions, electrons) from the site of the collision. These collision fragments are detected in laboratories in cloud chambers. As these fragments whisk through the humid (supersaturated) atmosphere in the cloud chamber, they collide with molecules, initiating the formation of droplets, and the trail of each fragment shows as a string of droplets that can be photographed, recording the event. Svensmark’s claim is that cosmic rays that manage to interact near sea-level initiate the beginnings of cloud formation, a process called nucleation. Cloud physics scientists usually assume (and measure) that condensation nuclei are present in the form of salt particles, dust (soil, soot, pollen, microbes) and ice crystals.

Svensmark then describes how the variability of the Solar Wind (a flux of charged particles from the Sun) affects the distribution of magnetism in space around the Earth (well known physics), and how these solar-driven magnetic fluctuations allow more or less of the cosmic rays to penetrate to the lower atmosphere. Magnetic fields deflect charged particles (like those inside the atoms of a piece of metal you bring close to a magnet), and conversely a large flux of charged particles can bend or distort a magnetic field. When the Sun is active and the Solar Wind is strong, the magnetic field it carries sweeps a greater portion of the incoming cosmic ray flux away from the Earth; when the Solar Wind is weak, cosmic rays find an easier approach. So, ultimately, the variations of the Solar Wind and of the unknown sources of cosmic rays manifest as variations of tropospheric cloud cover, which in combination with Milankovitch cycles set the heating and climate of the Earth — according to the theory.

Svensmark’s model has a great deal of good and interesting physics, but to establish it as fact will require a tremendous amount of quantification. It appeals to those who prefer an explanation of global warming that does not implicate industrialized society. One questionable assumption in this theory is that cosmic ray interactions dominate cloud formation, for if they do not, then the rest of the theory is unnecessary. Cloud physics is an old and sophisticated discipline, and the observations about the role of aerosols in nucleation and condensation cannot be so easily dismissed. Svensmark’s mechanism may actually occur, but at an insignificant level. Perhaps new data will bring new insights.11

Finally, we allow Freeman Dyson to sum up the sense of many critics, that climate modeling research is overfunded. Professional science is a feeding frenzy, being almost entirely a captive of government and corporate funding. The competing sales pitches of various groups and factions in science can reach such levels of hyperbole, and sometimes mendacity, that knowing onlookers become disgusted. It may well be that some climate research people are sounding the alarm of imminent doom in order to get the munificent attention of sponsors, a technique that has proved successful for the military-industrial complex. Some scientific critics of climate modeling may be people who resent their few scraps from the feeding frenzy; jealousy is not unknown among science folk. Other science critics may be allowing their ideological inclinations to overly influence their scientific judgments as regards climate modeling; again, scientists are human and they can sometimes allow their emotions to cloud their thinking. Such people are more likely to use words like “hoax” and “myth.” Criticisms that have technical substance are valuable, whatever the critic’s judgment as to the ultimate value of climate modeling work. The best response is to improve the work.

10. The Open Cycle Closes

It is so hard to give up a comforting fantasy. The shock, denial and anger expressed about global warming is really a psychological resistance to the loss of the pleasurable illusion of the “open cycle.” There is no escape from the 2nd Law of Thermodynamics, and there is no such thing as an “open system,” even though today’s obsessed consumers and the corporate overlordship prefer to imagine otherwise. Thermodynamically and materially, we live in a fishbowl world; there is no possibility of ejecting waste from our tails and never again swimming through the consequences.

We have enjoyed many false open cycles: disposable bottles and packaging, disposable combustion engine exhaust gases, disposable chemicals and nuclear waste, disposable inner cities, disposable under-educated and under-employed populations, disposable foreign peasants encumbering resource extraction, and private profit at public expense.

The “use” we get out of any item has to be compared to the resource and energy “cost” of producing it from its raw materials, and then of absorbing it back into the processes that produce that energy and those raw materials. When we take responsibility for the impact of the entire cycle, then we are motivated to choose products (and “services”) with the highest ratios of use to cost.

As the expanding impact of global warming cracks through the filters on consciousness of more people, there will be an increasing competition to escape and profit from the consequences. One obvious example of this is the nuclear power industry’s enthusiastic adoption of the fearfulness of global warming, “we are the solution” they say. The profit motive is shameless.12

Environmentalists of Luddite persuasions will urge a repentant return to a de-industrialized, agrarian style of life. The military-industrial complex will see the possibilities of “getting into the green” with sales of “green” high technology to the equally messianic capitalist elite, revolted at the idea of sliding “backward” into Third World experience, hence thrusting “forward as to war” to save “our way of life.” Photovoltaics, engineered materials and solid-state micro-electronics are impressive and capable technologies, but they cannot be produced in the quantities and at the costs needed to meet the energy needs of the Third World.13

I think the best response to global warming is to greet it as the next challenge to human development — it certainly presents delectable problems to be solved by any engineer and thermodynamicist interested to devise machines and structures that convert sunlight to electricity. It is time to move beyond our dependency on the burning of paleontologic leavings. It is time to ride the wave of heat washing over the Earth from the Sun. We would leave behind many outmoded technologies, political economies, behaviors and ideas, in making this change. There is nothing “dooming” humanity with the approach of global warming, except the mental inertia that seeks to preserve our petty ignorance, prejudices and greed. The laws of physics present no barrier, and economics is always an artificial construction, which we could choose to configure for the benefit of everybody.

Consider this: solar power at 1 percent conversion efficiency on 2 percent of the land area of the USA would produce the total national electrical energy use of 4×10^12 kilowatt-hours/year. That is 13,400 kWh/y for each of nearly 300 million people.
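
A sketch of that arithmetic, with an assumed round-number average insolation for the sunnier regions of the USA (about 6 kWh per square meter per day, or 250 W/m2 averaged over day and night); the insolation and land-area values are my assumptions, chosen only to show how the quoted total arises.

AVG_INSOLATION_W_M2 = 250.0      # assumed 24-hour average for sunny US regions
US_LAND_AREA_M2 = 9.1e12         # approximate US land area
FRACTION_OF_LAND = 0.02          # 2 percent of the land
EFFICIENCY = 0.01                # 1 percent conversion
HOURS_PER_YEAR = 8766.0

average_power_w = AVG_INSOLATION_W_M2 * US_LAND_AREA_M2 * FRACTION_OF_LAND * EFFICIENCY
annual_energy_kwh = average_power_w * HOURS_PER_YEAR / 1000.0

print(f"average power ~ {average_power_w:.1e} W")
print(f"annual energy ~ {annual_energy_kwh:.1e} kWh/y")                # ~4e12 kWh/y
print(f"per capita (300 million people) ~ {annual_energy_kwh / 3e8:,.0f} kWh/y")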

Imagine if the expense, manpower and energy that has been put into the Iraq War since 2003 had been put into solar thermal plants (up to 5 percent efficient), solar updraft towers, mountain and offshore wind (instead of oil) derricks, and residential-scale solar, wind (vortex tube) and co-generation (use of “waste” heat from water heaters) electrical generators. Imagine if we seriously tried to electrify our transportation systems and made all such networks, from the neighborhood buses and trolleys to the transcontinental rail service, as free (and quickly available) to use as sidewalks and staircases; who would drive to sit in traffic jams?

At this point we have gone beyond WGI (the science of global warming), to the topics covered in WGIII (policies in response to global warming), a good place to stop. My own conclusion is that the best response to global warming would be a fundamental change in the nature of human society. Logically, there is no requirement that human society change, but then there is also no requirement that it prosper or even survive.

Acknowledgments: Thanks to Jean Bricmont and Roger Logan for interesting questions.

(web sites active on 4-5 May 2007)

  1. Greenhouse Gas
  2. How Does A Climate Model Work?
  3. Milankovitch cycles
  4. Attribution of Recent Climate Change
  5. Scientific Opinion on Climate Change
  6. Intergovernmental Panel on Climate Change
  7. Arctic Sea Ice Melting Much Faster Than Expected
  8. More on Freeman Dyson
  9. Freeman Dyson
  10. Scientists Opposing the Mainstream Scientific Assessment of Global Warming
  11. Vincent J. Schaefer and John Day, A Field Guide To The Atmosphere (The Peterson Field Guide Series), Houghton Mifflin Company, Boston, 1981. Louis J. Battan, Cloud Physics and Cloud Seeding, Anchor/Doubleday, 1962. Duncan C. Blanchard, From Raindrops To Volcanoes, Anchor/Doubleday, 1967.
  12. “Mirage And Oasis — Energy Choices In An Age Of Global Warming,” New Economics Foundation (NEF), June 2005, ISBN-1-904882-01-3. UN Facing a Backlash on Emissions Action Plan
  13. “The Energy Challenge For Achieving The Millennium Development Goals,” UN-Energy, 22 July 2005. “Energizing The Millennium Development Goals, A Guide To Energy’s Role In Reducing Poverty,” United Nations Development Programme (UNDP), August 2005. “Energy For The Poor: Underpinning The Millennium Development Goals,” Department For International Development, Government of the United Kingdom, August 2002, ISBN-1-86192-490-9. E. F. Schumacher, Small Is Beautiful, Economics As If People Mattered (Blond & Briggs, Ltd., London; Harper & Row Publishers, Inc., 1973).

<><><><><><><>

Climate Change Denial Is Murder

Climate change denial by government is murder by weather.

By now everyone everywhere knows that climate change is a reality, especially the deniers who are simply lying to cover up their real intent, which is to continue with their capitalist schemes of self-aggrandisement even to the point of knowingly letting people die as a consequence.

During the last two weeks, Hurricanes Harvey, Irma and José, in succession, have formed in the tropical Atlantic Ocean to sweep northwest through the Caribbean toward the southern coasts of North America. Harvey has flooded hundreds of thousands of dwellings in the Gulf Coast area of Texas around Houston. Irma, the “lawnmower from the sky,” and one of the strongest Category 5 (out of 5) hurricanes ever recorded in the Atlantic, is just making landfall in Florida after razing a number of the smaller Caribbean islands; and Hurricane José is now sweeping into the Caribbean Sea from the east. Climate change denier and right-wing propagandist Rush Limbaugh, lounging in his Florida Xanadu, had called the official weather forecasts of Hurricane Irma’s path “fake news,” but has just heeded those same forecasts by evacuating from the storm, as well as from personal responsibility.

Climate change (as global warming) doesn’t “cause” hurricanes; it makes them more powerful and more frequent. Warmer oceans evaporate more readily, increasing the atmospheric moisture available for rain, and increasing the atmospheric heat energy available for driving winds. It takes heat to evaporate liquid water into vapor. Such vapor rising from the ocean surface mixes with the atmosphere. At higher elevations where the air temperature is lower, or in the presence of cold air currents, water vapor can lose its heat energy to the air and condense into droplets of liquid water. The heat energy released when water vapor condenses back into liquid – the latent heat of vaporization – is sizable (per unit mass of H2O) and adds to the energy of motion of the air molecules and air currents: wind. So, global warming makes for more moisture in the air over tropical ocean waters, and more heat energy in that air to drive winds and storms.
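
The latent heat involved is large; a sketch of the numbers, using the handbook value for water and a hypothetical storm rainfall (the rain depth and area below are arbitrary example values, not data from any particular hurricane):

LATENT_HEAT_J_PER_KG = 2.26e6     # latent heat of vaporization of water
WATER_DENSITY_KG_M3 = 1000.0

def condensation_heat(rain_depth_m, area_m2):
    """Heat released (J) when the vapor that falls as this rain condenses."""
    mass_kg = rain_depth_m * area_m2 * WATER_DENSITY_KG_M3
    return mass_kg * LATENT_HEAT_J_PER_KG

# Hypothetical example: 100 mm of rain over a 100 km x 100 km area.
print(f"heat released ~ {condensation_heat(0.1, 1e10):.1e} J")    # ~2.3e18 J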

The scientific facts about global warming have been known for a very long time, and were largely learned through government-funded research. US Government officials, as in the George W. Bush administration and now in the Donald Trump administration, who publicly deny these facts – excruciatingly documented and warehoused by the scientific, technical, military and commercial agencies of the US Government – are simply voicing bald-faced lies, and are thus betraying their official and constitutional responsibilities to the American public. Since this lying (and its enabling of continued greenhouse gas pollution) is done knowingly and for monetary gain, and the consequential more violent weather (droughts, hurricanes, floods) erupting from today’s global warming climate change always causes fatalities, then that climate change denial is at the very minimum an accessory to criminally negligent manslaughter, and beyond a reasonable doubt an accessory to premeditated murder.

Outline History of Awareness of Climate Change

What follows is a timeline, which I first made for myself in 2013, of the development of scientific knowledge about climate change. This summary outline includes some of the incidents of the intimately related “world energy crisis,” which I define as getting enough energy for a decent standard of living worldwide, coupled with the commercial competition between: fossil fuel energy versus nuclear energy versus solar/green energy.

Both fossil fuel energy and nuclear energy are intrinsically capitalist forms of resource hoarding and market exploitation, because they are extracted from the Earth at specific locations, burned to generate electricity at large and complex industrial plants, and distributed widely and distantly through a large electrical transmission line distribution grid.

On the other hand, solar/green energy is intrinsically a socialist or public commons type of energy resource because it is naturally abundant everywhere – like sunshine and wind – and is easily converted to electricity wherever it is collected. It is because of its intrinsic socialist (anti-capitalist) nature that solar and green energy are being legally attacked and restricted in US political jurisdictions controlled by rabidly capitalist special interests. The outline now follows.

The clock for a public policy response to the “energy crisis” (now enlarged to “Global Warming” and “Climate Change”) started ticking in October 1973 with the First Arab Oil Embargo (1973 Oil Crisis), and we’ve yet to get off our asses in response to the alarm (40+ years later).

Four years later, the energy problem was serious enough for President Jimmy Carter to address the nation about it on the 202nd anniversary of Paul Revere’s ride (18 April 1977). See http://www.youtube.com/watch?v=-tPePpMxJaA

Peak Oil was the fear in 1977, not Global Warming, even though science had been certain about Global Warming since 1955-1957.

What follows is a very brief synopsis of the scientific development of knowledge about Anthropogenic Global Warming (AGW, which is human-caused, CO2-driven Climate Change), along with incidents of the parallel World Energy Crisis.

Atmospheric carbon dioxide is a greenhouse gas: it absorbs and re-radiates the infrared (heat) radiation given off by the Earth’s surface, which the two dominant atmospheric gases making up 99.03% of the atmosphere, diatomic nitrogen (N2, 78.08% of the air) and diatomic oxygen (O2, 20.95% of the air), do not. The remaining 0.97% of the dry atmosphere is a mixture of rare gases (transparent to infrared) and trace gases and organic vapors that do absorb infrared, including methane (CH4) and carbon dioxide (CO2). The water vapor (H2O) carried along by the otherwise dry air is also a strong infrared absorber.

Quotes below are noted as from one of:
(HCCS): http://en.wikipedia.org/wiki/History_of_climate_change_science
(HS): http://www.eoearth.org/view/article/156308/

(JEA): John E. Allen, Aerodynamics, Hutchinson & Co. LTD, London, 1963.

In 1896 Svante Arrhenius calculated the effect of doubling atmospheric carbon dioxide to be an increase in surface temperatures of 5-6 degrees Celsius. Meanwhile, another Swedish scientist, Arvid Högbom, had been attempting to quantify natural sources of emissions of CO2 for purposes of understanding the global carbon cycle. Högbom found that estimated carbon production from industrial sources in the 1890s (mainly coal burning) was comparable with the natural sources. (HCCS)

In 1938 a British engineer, Guy Stewart Callendar, attempted to revive Arrhenius’s greenhouse-effect theory. Callendar presented evidence that both temperature and the CO2 level in the atmosphere had been rising over the past half-century, and he argued that newer spectroscopic measurements showed that the gas was effective in absorbing infrared [heat radiation] in the atmosphere. Nevertheless, most scientific opinion continued to dispute or ignore the theory. (HCCS)

In 1955 Hans Suess’s carbon-14 isotope analysis showed that CO2 released from fossil fuels was not immediately absorbed by the ocean. (HCCS)

In 1957, better understanding of ocean chemistry led Roger Revelle to a realization that the ocean surface layer had limited ability to absorb carbon dioxide. (HCCS)

In a seminal paper published in 1957 [Roger Revelle and Hans Suess, “Carbon dioxide exchange between atmosphere and ocean and the question of an increase of atmospheric CO2 during the past decades.” Tellus 9, 18-27 (1957)], Roger Revelle and Hans Suess argued that humankind was performing “a great geophysical experiment,” [and called] on the scientific community to monitor changes in the carbon dioxide content of waters and the atmosphere, as well as production rates of plants and animals. (HS)

AGW became common knowledge among aerodynamicists and atmospheric scientists by the 1960s, as witnessed by the following passage from John E. Allen’s 1963 book surveying the field of aerodynamics “for the non-specialist, the young student, the scholar leaving school and seeking an interest for his life’s work, and for the intelligent member of the public.”

Scientists are interested in the long-term effects on our atmosphere from the combustion of coal, oil and petrol and the generation of carbon dioxide. It has been estimated that 360,000 million tons of CO2 have been added to the atmosphere by man’s burning of fossil fuels, increasing the concentration by 13%. This progressive rise in the CO2 content of the air has influenced the heat balance between the sun, air and oceans, thus leading to small but definite changes in surface temperature. At Uppsala in Sweden, for example, the mean temperature has risen 2° in 60 years. (JEA)

22 April 1970: On this first Earth Day, MG,Jr decides to aim for a career in energy research, for a brave new future.

October 1973 – March 1974: The first Arab Oil Embargo (formally known as the 1973 Oil Crisis) erupts in the aftermath of the Yom Kippur War (1973 Arab-Israeli War, October 6–25, 1973).

Evidence for warming accumulated. By 1975, Manabe and Wetherald had developed a three-dimensional Global Climate Model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model’s atmosphere gave a roughly 2°C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased. (HCCS)

18 April 1977: President Jimmy Carter’s Address to the Nation on Energy.

The 1979 World Climate Conference of the World Meteorological Organization concluded “it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to a gradual warming of the lower atmosphere, especially at higher latitudes….It is possible that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century.” (HCCS)

1979-1980: The 1979 (or Second) Oil Crisis erupts from the turmoil of the Iranian Revolution, and the outbreak of the Iran-Iraq War in 1980.

March 28, 1979: A nuclear reactor meltdown occurs at the Three Mile Island power station in Pennsylvania.

July 15, 1979: President Jimmy Carter addresses the nation on its “crisis of confidence” during its 1979 energy crisis (oil and gasoline shortages and high prices). This address would become known as the “malaise speech,” though Carter never mentioned “malaise.” See http://www.youtube.com/watch?v=kakFDUeoJKM. Have you seen as honest an American presidential speech since? “Energy will be the immediate test of our ability to unite this nation.”

November 4, 1980: Ronald Reagan is elected president and the “big plunge” (the neoliberal shredding of the 1945 postwar social contract) begins. Poof went all my illusions about an American energy revolution.

April 26, 1986: A nuclear reactor at the Chernobyl power station in the Ukraine explodes, spewing radioactivity far and wide, and the fuel core melts down. The Chernobyl disaster was the worst nuclear power plant accident until the Fukushima Daiichi nuclear disaster of March 11, 2011.

1986: Ronald Reagan has the solar hot water system removed, which had been installed on the roof of the White House during the Carter Administration. The official US energy policy was obvious to me: solar energy and conservation were dead.

In June 1988, James E. Hansen [in Congressional testimony] made one of the first assessments that human-caused warming had already measurably affected global climate. Shortly after, a “World Conference on the Changing Atmosphere: Implications for Global Security” gathered hundreds of scientists and others in Toronto. They concluded that the changes in the atmosphere due to human pollution “represent a major threat to international security and are already having harmful consequences over many parts of the globe,” and declared that by 2005 the world should push its emissions some 20% below the 1988 level. (HCCS)

All that AGW scientific research has done since 1988 has been to add more decimal places to the numbers characterizing the physical effects. That was over a quarter century ago. So, I take it as a given that the American and even World consensus [so far] is in favor of probable human extinction sooner (by waste heat triggered climate change) rather than later (by expansion of the Sun into a Red Giant star). And, yes, the course of the extinction will proceed inequitably. Not what I want, but what I see as the logical consequences of what is. (End of the outline.)

Global warming is Earth’s fever from its infection with capitalism.

So, whenever some government, corporate or media potentate discharges another toxic cloud of climate change denialism, realize that what they are actually and dishonestly telling you is: “I am going to keep making my financial killing regardless, and I don’t care who has to die for it.”

<><><><><><><>

Also appearing at:

Climate Change Denial Is Murder
8 September 2017
https://dissidentvoice.org/2017/09/climate-change-denial-is-murder/

<><><><><><><>

Added on 11 September 2017:

<><><><><><><>

My Mind’s Ramble in Science

Ferrari P4 (2004)

(Above: 13, 17, 24, 28)

1972 US GP: Ferrari F1 engine (3 liter, flat 12 cylinder).

(Above: 14, 18, 19, 22, 28)

1972 US GP: Ferrari F1: Car 7 = Jacky Ickx (5th), Car 8 = Clay Regazzoni (8th), Car 9 = Mario Andretti (6th).

(Above: 13, 14, 17, 18, 19, 28)

P-51 Mustang (EMG photo, 1992)

(Above: 01, 14, 15, 16, 18, 19, 24, 28)

Spitfire Mk. XVIe (1987)

(Above: 01, 14, 15, 16, 18, 19, 24, 28)

Supersonic Jacob’s Ladder – Static

(Above: 19, 20, 21, 22, 23, 24, 25, 28, 29, 30, 31, 32, 33)

Supersonic Jacob’s Ladder – Flow

(Above: 19, 20, 21, 22, 23, 24, 25, 28, 29, 30, 31, 32, 33, 35, 40, 42)

Imagine a 1 nanosecond snapshot of a nuclear explosion.

(Above: 26, 28, 30, 31, 34, 35, 36, 37, 38, 39)

Sunflare Blue Sky Clouds

(Above: 27, 28, 40, 41, 42, 43, 44, 45)

Longwood Gardens Greenhouse

(Above: 27, 28, 44, 45)

My Mind’s Ramble in Science (1952-2007):

01. Airplanes
02. Tinker Toys
03. Godzilla
04. Rodan
05. Invaders From Mars
06. The Day The Earth Stood Still
07. Forbidden Planet
08. Tom Swift, Jr.
09. Nuclear Power
10. Submarines
11. Bicycles
12. Skateboards
13. Race Cars
14. Piston Engines
15. WW2 Aircraft
16. Supercharged Piston Engines
17. Race Car design
18. Piston Engine design
19. Engineering
20. Mathematics
21. Computer programming
22. Thermodynamics
23. Fluid Mechanics
24. Aerodynamics
25. Supersonic Flow
26. Fusion Energy
27. Solar Energy
28. Photography
29. Gas Physics
30. Plasma Physics
31. Ionized Flow
32. Molecular Physics
33. Gas Lasers
34. Nuclear Explosion Radiation
35. Electrical Physics
36. Nuclear Explosion Electric Generators
37. Magnetohydrodynamics
38. Solar Physics
39. Cosmic Plasma
40. Lightning
41. Atmospheric Physics
42. De-NOx chemical physics
43. Global Warming chemical physics
44. Solar thermal-to-electric generators
45. Publicly Owned National Solar Electric System

<><><><><><><>

Climate Change, Life, Green Energy


>>> Earth will survive Climate Change, humanity may not. <<<

<><><><><><><><><><><><><>
<> MG,Jr. on Climate Change  <>
<><><><><><><><><><><><><>

In response to questions like: How do we know? See:
Climate and Carbon, Consensus and Contention
4 June 2007
http://www.dissidentvoice.org/2007/06/climate-and-carbon-consensus-and-contention/

In response to questions like: How do we know? See “Addendum” (at bottom of):
How Dangerous is Climate Change?, How Much Time Do We Have?
5 December 2015
https://manuelgarciajr.com/2015/12/05/how-dangerous-is-climate-change-how-much-time-do-we-have/

In response to questions like: Is it even a major threat? See:
How Dangerous is Climate Change?, How Much Time Do We Have?
5 December 2015
https://manuelgarciajr.com/2015/12/05/how-dangerous-is-climate-change-how-much-time-do-we-have/

In response to questions like: Exactly how do we cause global warming? See:
Closing the Cycle: Energy and Climate Change
25 January 2014
https://manuelgarciajr.com/2014/01/25/closing-the-cycle-energy-and-climate-change/

<><><><><><><><><><><><><><><><><><>
Life, From the Big Bang to the Climate Change Era:
Outline History of Life and Human Evolution
29 January 2017
https://manuelgarciajr.com/2017/01/29/outline-history-of-life-and-human-evolution/

<><><><><><><><><><><><><><>
<>  MG,Jr. on Renewable Energy <>
<><><><><><><><><><><><><><>

Of all the articles I have ever written, the one I most wish had gotten wide attention and actually affected public thinking and action, is linked below.
Energy for Society in Balance with Nature
8 June 2015
https://manuelgarciajr.com/2015/06/08/energy-for-society-in-balance-with-nature/

Renewable Energy (and war and peace):
Green Energy versus The Uncivil War
18 April 2017
https://manuelgarciajr.com/2017/04/18/green-energy-versus-the-uncivil-war/

<><><><><><><><><><><><><><><><><>