Your Genetic Presence Through Time

The propagation through time of your personal genetic presence within the genetic sea of humanity can be visualized as a wave that arises out of the pre-conscious past before your birth, moves through the streaming present of your conscious life, and dissipates into the post-conscious future after your death.

You are a pre-conscious genetic concentration drawn out of the genetic diffusion of your ancestors. If you have children who survive you then your conscious life is the time of increase of your genetic presence within the living population. Since your progeny are unlikely to reproduce exponentially, as viruses and bacteria do, your post-conscious genetic presence is only a diffusion to insignificance within the genetic sea of humanity.

During your conscious life, you develop a historical awareness of your pre-conscious past, with a personal interest that fades with receding generations. Also during your conscious life, you can develop a projective concern about your post-conscious future, with a personal interest that fades with succeeding generations and with increasing predictive uncertainty.

Your conscious present is the sum of: your immediate conscious awareness, your reflections on your prior conscious life, your historical awareness of your pre-conscious past, and your concerns about your post-conscious future.

Your time of conscious present becomes increasingly remote in the historical awareness of your succeeding generations.

Your loneliness in old age is just your sensed awareness of your genetic diffusion into the living population of your conscious present and post-conscious future.

To present the above ideas in a simple quantitative way, consider a model human population in which:

— every individual lives 75 years,

— at age 25, every individual mates and produces 2 children.

In this model, reproductive mating is assumed to produce 2 children so as to maintain a stable population by adding one replacement each for the mother and father (who only have one reproductive mating per lifetime; but any number of non-reproductive matings are allowed).

So, 175 years prior to the birth of a model individual here, 128 ancestors (ggggg-grandparents) are born. The genetic concentration leading to the target model individual proceeds forward with the birth of 64 gggg-grandparents 150 years prior to the birth of the target individual; 32 ggg-grandparents 125 years prior; 16 gg-grandparents 100 years prior; 8 great-grandparents 75 years prior; 4 grandparents 50 years prior; and 2 parents 25 years prior.
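The doubling of ancestor counts with each generation back can be sketched in a few lines of Python (a sketch of the model only; the function name is my own):

```python
# Model assumption: one reproductive mating per couple, so ancestor
# counts double with every generation back (2 parents, 4 grandparents, ...).
def ancestors_born(generations_back):
    """Number of ancestors born `generations_back` generations before you."""
    return 2 ** generations_back

# One generation every 25 years, back to the ggggg-grandparents:
for g in range(1, 8):
    print(f"{25 * g:3d} years prior to birth: {ancestors_born(g):3d} ancestors")
```

Running this reproduces the counts above: 2 parents at 25 years prior, up through 128 ggggg-grandparents at 175 years prior.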

During conscious life, the target individual has 2 children at age 25, acquires 4 grandchildren at age 50, and acquires 8 great-grandchildren at age 75, when he/she dies. The number of progeny increases during the post-conscious future of the target individual, with a diminishing portion of the target individual’s genes in each descendant as the generation number increases.

You can see from the Table that you would have very little genetic connection with ancestors older than your great-grandparents (earlier than generation -3, or 75 years before “your” birth, in the model above), and thus (usually) a diminished interest in family history before that time.

Your most closely related other individuals are your brothers or sisters (a twin, in the model), with whom you share 100% of your genetic sources: 50% from your mother and 50% from your father, for each of you. However, the particular mix of genes each sibling receives from the father may differ, as may the mix each receives from the mother. Identical twins would have identical paternal mixes, and identical maternal mixes.

You can see that for progeny beyond the +3 generation (your great-great-grandchildren) your genetic contribution is minor, and so your concerns about such distant future progeny (beyond 25 years after your death) are usually diminished.

So, the 175-year interval of human history in which you (as a model individual, as above) would most likely have the greatest personal interest includes the 75 years prior to your birth (your ancestors’ histories), the 75 years of your model lifetime (your conscious life), and the first 25 years of your post-conscious future (the times of conscious living of your children, grandchildren and great-grandchildren).

In summary: You are genetically concentrated from the pre-conscious past, genetically prominent in the conscious present, and genetically diffused into the post-conscious future.


ADDENDUM, 15 June 2018

One can formulate a normalized genetic presence (NGP) parameter as follows (which I describe as it is applied to the specific population model used earlier):

(1) For your pre-conscious time, at each generation divide your potential genetic presence (which is equal to 1) by the number of ancestors (carriers) born at that generation. This will be a fraction, and we call it your potential genetic presence because it occurs prior to your live birth.

(2) For your conscious life time, at each generation form the sum of (a) + (b):

(a) your living genetic presence, which is defined as your total genetic complement divided by the number of organisms carrying it. This number is 1/1 = 1 while you are alive, and it is zero after you die.

(b)  the sum of your transmitted genes, normalized by the number of your LIVING progeny, as follows:

— 2 children are each 50% carriers of your genes, thus there are 2 organisms carrying a total of 1 genetic presence of you (jumbled, of course), thus: 1/2 = 0.5, (from your 25th to 100th year, in the model), PLUS

— 4 grandchildren who are each 25% carriers of your genes, so there are 4 organisms carrying another 1 genetic presence of you (jumbled, of course), thus 1/4 = 0.25 (from your 50th to 125th year, in the model), PLUS

— 8 great-grandchildren who are each 12.5% carriers of your genes, so there are 8 organisms carrying another 1 genetic presence of you (jumbled, of course), thus 1/8 = 0.125 (for a brief time during your 75th year, as “you” die then in the model; they continue to your 150th year),

(3) For your post-conscious time, your NGP equals the transmitted genetic presence carried by your living progeny:

— after “your” (model) death, you continue to calculate transmitted genetic presence factors for advancing generations of your LIVING progeny by a similar logic to the previous steps.
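The three NGP rules above can be combined into a single function of time (a minimal sketch of the model as described; the function name and the choice to evaluate at exact year boundaries are mine):

```python
import math

LIFESPAN = 75   # every model individual lives 75 years
GEN_GAP = 25    # every model individual has 2 children at age 25

def ngp(t):
    """Normalized genetic presence at year t (t = 0 at your birth)."""
    if t < 0:
        # (1) pre-conscious: potential presence 1 divided among the
        # 2**g ancestors of generation -g
        g = math.ceil(-t / GEN_GAP)
        return 1.0 / 2 ** g
    # (2a) your own living presence: 1 while alive, 0 after death
    total = 1.0 if t < LIFESPAN else 0.0
    # (2b) and (3): generation +k, born at year 25k, has 2**k carriers
    # holding a combined transmitted presence of 1, normalized to 1/2**k
    k = 1
    while GEN_GAP * k <= t:
        born = GEN_GAP * k
        if t < born + LIFESPAN:          # that cohort is still living
            total += 1.0 / 2 ** k
        k += 1
    return total

print(ngp(-50))   # grandparents' generation: 0.25
print(ngp(50))    # alive, with living children and grandchildren: 1.75
print(ngp(100))   # post-conscious, 25 years after model death: 0.4375
```

The function peaks briefly at 1.875 in the 75th year (you, plus three living generations of progeny), then steps down toward insignificance, which is the diffusion described above.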

The following TABLE 2, and graph, show the NGP over time of a target individual in the model used previously.

The Atlantic Overturning Current Is Slowing

The Atlantic Overturning Current is part of a worldwide twisted loop of ocean water, called the thermohaline cycle (thermo = heat, haline = salt), which emerges very salty and warm out of the Gulf of Mexico, travels north as a surface current along the east coast of North America, veers east in the North Atlantic toward Europe, then loops back west to a region just south of Greenland where it cools and sinks to the ocean floor – because it has become denser than the surrounding and less salty North Atlantic waters (colder water is denser than warmer water, and saltier water is denser than fresher water of equal temperature). The dense highly salted descending water then runs as a cold deep ocean current south along the east coast of South America, and continues in a complicated path along the ocean floor into the Pacific Ocean, where it warms and eventually rises to become a surface current of more buoyant less salty water. This current distributes solar heat collected by ocean waters in tropical latitudes to higher latitudes (closer to the poles).

In 2004, Peter Schwartz and Douglas Randall described the thermohaline cycle this way: “In this thousand-year cycle, water from the surface in tropical areas becomes more saline through evaporation. When it circulates to the poles and becomes cold (“thermo”), the greater density still present from higher salt (“haline”) concentration causes the water to sink to great depths. As with most large-scale geological processes, the thermohaline cycle is not thoroughly understood. Wallace Broecker has been studying the cycle for decades and, according to the December 1996 issue of Discover magazine, he has shown that the thermohaline cycle has not always been in operation, and that it has a strong effect on global climate.”

In 2003-2004, the US Department of Defense commissioned a secret study of what might be the worst possible effects of Global Warming triggering an “abrupt climate change” in the near future, in order to estimate the potential liabilities that military planning would have to consider (to maintain US security, and global power). This study was conducted during the climate-change-denying George W. Bush Administration. When the existence of the resulting report, produced by independent researchers Peter Schwartz and Douglas Randall, became publicly known there was such a public outcry (bad PR for the DOD) that the report was declassified and made publicly available.

The Schwartz-Randall report pointed to the abrupt onset of a significantly colder, dryer climate in the Northern Hemisphere as the most perilous possible consequence of Global Warming up to about 2010, because such warming (the trapping of incoming solar radiation and outgoing infrared radiation from the land and oceans, by greenhouse gases in the atmosphere) might cause the thermohaline cycle to stop. How? Global Warming causes glaciers and ice caps to melt, and such fresh (unsalted) meltwater from Greenland floods into the North Atlantic where the thermohaline current dives to the ocean floor. This fresh surface water dilutes the high salinity of the presently descending thermohaline current, making its waters less dense (less heavy) and so less likely to sink. Sufficient freshening of the thermohaline current would cause it to stop entirely, shutting off this global conveyor belt of climate-regulating oceanic solar heat.

Though abrupt climate change is a less likely, worst-case scenario as compared to gradual climate change, Schwartz and Randall concluded that such an occurrence would “challenge United States national security in ways that should be considered immediately.” The climatic cooling that might occur in the Northern Hemisphere as a result of a collapse of the thermohaline cycle could be like the century-long period 8,200 years ago with temperatures 5 °F (2.8 °C) colder, or the 1,300-year period beginning 12,700 years ago with temperatures 27 °F (15 °C) colder. The shift to colder climate could occur as rapidly as 5 °F (2.8 °C) of cooling per decade. So, the world could plunge into a new Ice Age within a period of twenty years. In their 2004 report, Schwartz and Randall showed data on the salinity of the North Atlantic since 1960; the trend was a steady freshening. (I wrote about the above in an article for the Internet, in July 2004.)

A 2015 scientific publication of new observations on the “Atlantic Meridional Overturning Circulation” (the Atlantic part of our thermohaline cycle) concluded that “the melting Greenland ice sheet is likely disturbing the circulation.” The news article about this study [Rahmstorf, S., Box, J., Feulner, G., Mann, M., Robinson, A., Rutherford, S., Schaffernicht, E. (2015): “Evidence for an exceptional 20th-Century slowdown in Atlantic Ocean overturning,” Nature Climate Change] concluded:

“The scientists certainly do not expect a new ice age, thus the imagery of the ten-year-old Hollywood blockbuster ‘The Day After Tomorrow’ is far from reality. However, it is well established that a large, even gradual change in Atlantic ocean circulation could have major negative effects. ‘If the slowdown of the Atlantic overturning continues, the impacts might be substantial,’ says Rahmstorf. ‘Disturbing the circulation will likely have a negative effect on the ocean ecosystem, and thereby fisheries and the associated livelihoods of many people in coastal areas. A slowdown also adds to the regional sea-level rise affecting cities like New York and Boston. Finally, temperature changes in that region can also influence weather systems on both sides of the Atlantic, in North America as well as Europe.’ If the circulation weakens too much it can even break down completely – the Atlantic overturning has for long been considered a possible tipping element in the Earth System. This would mean a relatively rapid and hard-to-reverse change.”

On April 11, 2018, an article titled “Stronger evidence for a weaker Atlantic overturning” was published online. This article notes:

“The Atlantic overturning—one of Earth’s most important heat transport systems, pumping warm water northward and cold water southward—is weaker today than any time before in more than 1000 years. Sea surface temperature data analysis provides new evidence that this major ocean circulation has slowed down by roughly 15 percent since the middle of the 20th century, according to a study published in the highly renowned journal Nature by an international team of scientists. Human-made climate change is a prime suspect for these worrying observations. There have been long debates whether the Atlantic overturning could collapse, being a tipping element in the Earth system. The present study does not consider the future fate of this circulation, but rather analyses how it has changed over the past hundred years. Nevertheless, Robinson cautions: ‘If we do not rapidly stop global warming, we must expect a further long-term slowdown of the Atlantic overturning. We are only beginning to understand the consequences of this unprecedented process—but they might be disruptive.’ Several studies have shown, for example, that a slowdown of the Atlantic overturning exacerbates sea-level rise on the US coast for cities like New York and Boston. Others show that the associated change in Atlantic sea surface temperatures affects weather patterns over Europe, such as the track of storms coming off the Atlantic. Specifically, the European heat wave of summer 2015 has been linked to the record cold in the northern Atlantic [caused by the inflow of cold Greenland meltwater] in that year—this seemingly paradoxical effect occurs because a cold northern Atlantic promotes an air pressure pattern that funnels warm air from the south into Europe.”

While the scientists are not being alarmist Jeremiahs warning of an imminent climapocalypse as depicted in the Hollywood movie “The Day After Tomorrow,” they nevertheless make it clear that if this Global-Warming-caused (that is, fossil-fuel-burning, human-caused) slowing of the thermohaline cycle continues to the point of a dead stop, then this would likely be a tipping point of the entire Earth climate system, leading to “a relatively rapid and hard-to-reverse change” — not for the better.


Thirsty Invaders, Chasing Heat
19 July 2004
Manuel García, Jr.


Now appearing at Counterpunch:

An Oceanic Problem: the Atlantic Overturning Current is Slowing
13 April 2017


Schwartz-Randall report


The Atlantic Overturning Current Is Slowing
20 April 2018


Einstein-Hawking-Pi Day #1

March 14 is given the name “Pi Day” by many mathematics enthusiasts because the numerical calendar label “3/14,” for the month (March) then day (the 14th), coincides with the first three digits of the irrational number Pi, 3.14159…

I have called today, the 14th of March, 2018, “Einstein-Hawking-Pi Day #1” because it is the first instance of the triple ‘resonance’ of: an anniversary of Albert Einstein’s birthday, the actual date of Stephen Hawking’s death, and a Pi Day.

I commemorated the day by taking photographs of living eternity, that is to say eternal (so far as we are concerned) principles expressing themselves in radiant instants of life-giving beauty.

I added 10 of these photos (at maximum resolution) to my Flickr site, for display. They are the first ten scenics “from top down” at the following webpage. You can view them, and get technical details there.

MG,Jr. Photostream



The Anthropocene’s Birthday

The Anthropocene’s Birthday, or the birthyear of human-accelerated climate change.

Scientists have found a major spike in the amount of Carbon-14 within the tree rings of “The Loneliest Tree In The World,” in the ring corresponding to October-December 1965.

This tree is a Sitka Spruce, a species from the American Northwest (and into Canada) that was planted on Campbell Island in 1901 (or 1905), which island is in the Southern Ocean about 400 miles south of the southern tip of New Zealand.

There are no other trees on Campbell Island, just low scrubs. Since the next landmass south of Campbell Island is Antarctica, this tree is the furthest one south on Earth (so far as I can tell). The next closest tree is north about 170 miles, on another small island south of New Zealand.

The significance of this finding is that geologists now know that the Anthropocene – the geological Epoch (after the Holocene Epoch) in which GLOBAL (not just local) climate is clearly being influenced by human activity, and at an accelerating rate – began in 1965. The Holocene Epoch thus spanned from about 11,700 years ago to 1965.

The Carbon-14 marker is from the radioactive fallout from atmospheric nuclear bomb testing, which grew from 1945, peaked in 1962, and largely stopped in 1963 as a result of the Partial Test Ban Treaty of that year (except for a few isolated atmospheric tests since).

The accumulated radioactive fallout from the massive testing of the 1950s and early 1960s (with a huge amount in 1962) had finally spread out uniformly through the global atmosphere, and the Carbon-14 from that fallout was being infused into trees globally through the process of photosynthesis.

So, this spike in tree-ring Carbon-14 in 1965 is a GLOBAL marker of human activity on global climate, and thus marks the ‘birthday’ of human-induced/accelerated Climate Change.

Coring “the loneliest tree in the world”

The geophysical transition of 1965, noted above, was imperceptible to the human senses, but it is a very significant event/transition in the history of Planet Earth.

You should easily be able to find internet sources giving all the scientific details including charts/graphs of the actual Carbon-14 signature (of the subject tree) over time, which clearly displays the spike during 1965. This same spike was found in trees sampled in the Northern Hemisphere as well, and since there was the same marker on trees globally – for the first time – it was clear the spike indicated a uniformly global effect. And that effect was caused by humans. Hence, the birth of the Anthropocene.

Geologists are now updating their table of geological supereon-eon-era-period-epoch-age, and all textbooks will have to be updated. The last Epoch (the Holocene) of the Quaternary Period extended from 11,700 years ago, when the last glacial retreat was clearly accelerating and the Ice Ages were over, to 1965, when humanity gained leverage on the global climate: the Anthropocene.

When will the next Epoch begin, and how will it be determined (and will there be any ‘who’ to do so)?


The part of this posting down to and including the weblink to the tree-coring video was published online at Counterpunch, see below.

The Anthropocene’s Birthday, or the Birth-Year of Human-Accelerated Climate Change
22 February 2018
by Manuel García, Jr.


Proton Beam Driven Electron MHD

MHD 1984
[“MHD 1984” is a link to a PDF file of the following.]


Addendum, 7 March 2018

Alfvén’s magnetic pumping (by hydromagnetic, or Alfvén, waves) and hydromagnetic shocks are now recognized by computational physical science.

Scientists crack 70-year-old mystery of how magnetic waves heat the sun
March 6, 2018, Queen’s University Belfast


In 1942, Swedish physicist and engineer Hannes Alfvén presented his theory of hydromagnetic waves, for which he won the Nobel Prize in physics in 1970. Between 1983 and 1985, I tried to convince my supposed colleagues and science-bureaucrat superiors (timorous bosses and climbers at Livermore, and also Los Alamos) to study this type of magnetic wave phenomenon, after I found ample evidence of it in very elaborate computer simulations of nuclear explosions (simulations being all they knew how to do) driving moderately relativistic magnetohydrodynamics (which they neither recognized nor understood at all). So, I enjoyed reading about the new advances in hydromagnetic wave physics made at Queen’s University Belfast. My cartoon-physics monograph on the subject (to the extent I could understand it), from 1983-1985, is available at my blog under the title “Proton Beam Driven Electron MHD.” One of my favorite works ever.


The Thermodynamics of 9-11

When hijacked airliners crashed into the tall Towers of the World Trade Center, in New York City [on 11 September 2001], each injected a burning cloud of aviation fuel throughout the 6 levels (WTC 2) to 8 levels (WTC 1) in the impact zone. The burning fuel ignited the office furnishings: desks, chairs, shelving, carpeting, work-space partitions, wall and ceiling panels; as well as paper and plastic of various kinds.

How did these fires progress? How much heat could they produce? Was this heat enough to seriously weaken the steel framework? How did this heat affect the metal in the rubble piles in the weeks and months after the collapse? This report is motivated by these questions, and it will draw ideas from thermal physics and chemistry. My previous report on the collapses of the WTC Towers described the role of mechanical forces (1).

Summary of the National Institute of Standards and Technology (NIST) Report

Basic facts about the WTC fires of 9/11/01 are abstracted by the numerical quantities tabulated here.

Table 1, Time and Energy of WTC Fires

ITEM                              WTC 1           WTC 2
impact time (a.m.)          8:46:30          9:02:59
collapse (a.m.)               10:28:22        9:58:59
time difference               1:41:52          0:56:00
impact zone levels          92-99            78-83
levels in upper block       11                 27
heat rate (40 minutes)     2 GW            1 GW
total heat energy             8000 GJ       3000 GJ

Tower 1 stood for one hour and forty-two minutes after being struck between levels 92 and 99 by an airplane; the block above the impact zone had 11 levels. During the first 40 minutes of this time, fires raged with an average heat release rate of 2 GW (GW = gigawatt = 10^9 watts), and the total heat energy released during the interval between airplane impact and building collapse was 8000 GJ (GJ = gigajoule = 10^9 joules).

A joule is a unit of energy; a watt is a unit of power; and one watt equals an energy delivery rate of one joule per second.

Tower 2 stood for fifty-six minutes after being struck between levels 78 and 83, isolating an upper block of 27 levels. The fires burned at a rate near 1 GW for forty minutes, diminishing later; and a total of 3000 GJ of heat energy was released by the time of collapse.

WTC 2 received half as much thermal energy during the first 40 minutes after impact, had just over twice the upper-block mass, and fell in half the time observed for WTC 1. It would seem that WTC 1 stood longer, despite receiving more thermal energy, because its upper block was less massive.

The data in Table 1 are taken from the executive summary of the fire safety investigation by NIST (2).

The NIST work combined materials and heat transfer lab experiments, full-scale tests (wouldn’t you like to burn up office cubicles?), and computer simulations to arrive at the history and spatial distribution of the burning. From this, the thermal histories of all the metal supports in the impact zone were calculated (NIST is very thorough), which in turn were used as inputs to the calculations of stress history for each support. Parts of the structure that were damaged or missing because of the airplane collision were accounted for, as was the introduction of combustible mass by the airplane.

Steel loses strength with heat. For the types of steel used in the WTC Towers (plain carbon, and vanadium steels) the trend is as follows, relative to 100% strength at habitable temperatures.

Table 2, Fractional Strength of Steel at Temperature

Temperature, degrees C      Fractional Strength, %
200                                     86
400                                     73
500                                     66
600                                     43
700                                     20
750                                     15
800                                     10
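Linear interpolation of Table 2 (an approximation I am adding here; the true strength curve is not exactly piecewise linear) reproduces the roughly 50% strength near 570 C that is used later in this report:

```python
# Table 2 as (temperature C, fractional strength %) pairs.
TABLE = [(200, 86), (400, 73), (500, 66), (600, 43),
         (700, 20), (750, 15), (800, 10)]

def strength_percent(temp_c):
    """Fractional strength of WTC steel, linearly interpolated from Table 2."""
    if temp_c <= TABLE[0][0]:
        return TABLE[0][1]
    for (t0, s0), (t1, s1) in zip(TABLE, TABLE[1:]):
        if temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)
    return TABLE[-1][1]    # beyond 800 C, hold the last tabulated value

print(round(strength_percent(570)))   # about 50 (% of original strength)
```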

I use C for Centigrade, F for Fahrenheit, and do not use the degree symbol in this report.

The fires heated the atmosphere in the impact zone (a mixture of gases and smoke) to temperatures as high as 1100 C (2000 F). However, there was a wide variation of gas temperature with location and over time because of the migration of the fires toward new sources of fuel, a complicated and irregular interior geometry, and changes of ventilation over time (e.g., more windows breaking). Early after the impact, a floor might have some areas at habitable temperatures, and other areas as hot as the burning jet fuel, 1100 C. Later on, after the structure had absorbed heat, the gas temperature would vary over a narrower range, approximately 200 C to 700 C away from centers of active burning.

As can be seen from Table 2, steel loses half its strength when heated to about 570 C (1060 F), and nearly all of it past 700 C (1300 F). Thus, the structure of the impact zone, with temperatures varying between 200 C and 700 C near the time of collapse, would have only between 20% and 86% of its original strength at any given location.

The steel frames of the WTC Towers were coated with “sprayed fire resistant materials” (SFRMs, or simply “thermal insulation”). A key finding of the NIST Investigation was that the thermal insulation coatings were applied unevenly — even missing in spots — during the construction of the buildings, and — fatally — that parts of the coatings were knocked off by the jolt of the airplane collisions.

Spraying the lumpy gummy insulation mixture evenly onto a web of structural steel, assuming it all dries properly and none is banged off while work proceeds at a gigantic construction site over the course of several years, is an unrealistic expectation. Perhaps this will change, as a “lesson learned” from the disaster. The fatal element in the WTC Towers story is that enough of the thermal insulation was banged off the steel frames by the airplane jolts to allow parts of frames to heat up to 700 C. I estimate the jolts at 136 times the force of gravity at WTC 1, and 204 at WTC 2.

The pivotal conclusion of the NIST fire safety investigation is perhaps best shown on page 32, in Chapter 3 of Volume 5G of the Final Report (NIST NCSTAR 1-5G WTC Investigation), which includes a graph from which I extracted the data in Table 2, and states the following two paragraphs. (The NIST authors use the phrase “critical temperature” for any value above about 570 C, when steel is below half strength.)


“As the insulation thickness decreases from 1 1/8 in. to 1/2 in., the columns heat up quicker when subjected to a constant radiative flux. At 1/2 in. the column takes approximately 7,250 s (2 hours) to reach a critical temperature of 700 C with a gas temperature of 1,100 C. If the column is completely bare (no fireproofing) then its temperature increases very rapidly, and the critical temperature is reached within 350 s. For a bare column, the time to reach a critical temperature of 700 C ranges between 350 to 2,000 s.

“It is noted that the time to reach critical temperature for bare columns is less than the one hour period during which the buildings withstood intense fires. Core columns that have their fireproofing intact cannot reach a critical temperature of 600 C during the 1 or 1 1/2 hour period. (Note that WTC 1 collapsed in approximately 1 1/2 hour, while WTC 2 collapsed in approximately 1 hour). This implies that if the core columns played a role in the final collapse, some fireproofing damage would be required to result in thermal degradation of its strength.” (3)



Airplane impact sheared columns along one face and at the building’s core. Within minutes, the upper block had transferred a portion of its weight from central columns in the impact zone, across a lateral support at the building crown called the “hat truss,” and down onto the three intact outer faces. Over the course of the next 56 minutes (WTC 2) and 102 minutes (WTC 1) the fires in the impact zone would weaken the remaining central columns, and this steadily increased the downward force exerted on the intact faces. The heat-weakened frames of the floors sagged, and this bowed the exterior columns inward at the levels of the impact zone. Because of the asymmetry of the damage, one of the three intact faces took up much of the mounting load. Eventually, it buckled inward and the upper block fell. (1)

Now, let’s explore heat further.

How Big Were These Fires?

I will approximate the size of a level (1 story) in each of the WTC Towers as a volume of 16,080 m^3 with an area of 4020 m^2 and a height of 4 m (4). Table 3 shows several ways of describing the total thermal energy released by the fires.

Table 3, Magnitude of Thermal Energy in Equivalent Weight of TNT

ITEM                                  WTC 1              WTC 2
energy (Q)                          8000 GJ           3000 GJ
# levels                              8                       6
tons of TNT                       1912                 717
tons/level                           239                  120
lb/level                               478,000           239,000
kg/m^2 (impact floors)       54                    27
lb/ft^2 (impact floors)         11                    6

The fires in WTC 1 released an energy equal to that of an explosion of 1.9 kilotons of TNT; the energy equivalent for WTC 2 is 717 tons. Obviously, an explosion occurs in a fraction of a second while the fires lasted an hour or more, so the rates of energy release were vastly different. Even so, this comparison may sharpen the realization that these fires could weaken the framework of the buildings significantly.
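The TNT equivalences in Table 3 follow from the standard convention that one ton of TNT equals 4.184 GJ; a quick check (the helper function is mine):

```python
TNT_GJ_PER_TON = 4.184   # standard convention: 1 ton of TNT = 4.184 GJ

def tnt_tons(q_gj):
    """Energy Q (in GJ) expressed as tons of TNT equivalent."""
    return q_gj / TNT_GJ_PER_TON

for name, q_gj, levels in [("WTC 1", 8000, 8), ("WTC 2", 3000, 6)]:
    t = tnt_tons(q_gj)
    print(f"{name}: {t:.0f} tons of TNT, {t / levels:.0f} tons per level")
```

This reproduces the 1912 and 717 tons (and 239 and 120 tons per level) of Table 3.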

How Hot Did The Buildings Become?

Let us pretend that the framework of the building is made of “ironcrete,” a fictitious mixture of 72% iron and 28% concrete. This framework takes up 5.4% of the volume of the building, the other 94.6% being air. We assume that everything else in the building is combustible or an inert material, and the combined mass and volume of these are insignificant compared to the mass and volume of ironcrete. I arrived at these numbers by estimating volumes and cross sectional areas of metal and concrete in walls and floors in the WTC Towers.

The space between floors is under 4 meters; and the floors include a layer of concrete about 1/10 meter thick. The building’s horizontal cross-section was a 63.4 meter square. Thus, the gap between floors was nearly 1/10 of the distance from the center of the building to its periphery. Heat radiated by fires was more likely to become trapped between floors, and stored within the concrete floor pans, than it was to radiate through the windows or be carried out through broken windows by the flow of heated air. We can estimate a temperature of the framework, assuming that all the heat became stored in it.

The amount of heat that can be stored in a given amount of matter is a property specific to each material, called its heat capacity. The ironcrete mixture would have a volumetric heat capacity of Cv = 2.8*10^6 joules/(Centigrade*m^3); (* = multiply). In the real buildings, the large area of the concrete pads would absorb the heat from the fires and hold it, since concrete conducts heat very poorly. The effect is to bathe the metal frame with heat, as if it were in an oven or kiln. Ironcrete is my homogenization of materials to simplify this numerical example.

The quantity of heat energy Q absorbed within a volume V of material with a volumetric heat capacity Cv, whose temperature is raised by an amount dT (for “delta-T,” a temperature difference) is Q = Cv*V*dT. We can solve for dT. Here, V = (870 m^3)*(# levels); also dT(1) corresponds to WTC 1, and dT(2) corresponds to WTC 2.

dT(1) = (8 x 10^12)/[(2.8 x 10^6)*(870)*8] = 410 C,

dT(2) = (3 x 10^12)/[(2.8 x 10^6)*(870)*6] = 205 C.

Our simple model gives a reasonable estimate of an average frame temperature in the impact zone. The key parameter is Q (for each building). NIST spent considerable effort to arrive at the Q values shown in Table 3 (3). Our model gives a dT comparable to the NIST results because both calculations deposit the same energy into about the same amount of matter. Obviously, the NIST work accounts for all the details, which is necessary to arrive at temperatures and stresses that are specific to every location over the course of time. Our equation of heat balance Q = Cv*V*dT is an example of the conservation of energy, a fundamental principle of physics.
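The heat-balance arithmetic above can be checked with a short Python sketch. The constants are the article's own figures (Cv for ironcrete, 870 m^3 of frame per level, and the NIST-derived Q values); the function name is mine.

```python
# Heat balance Q = Cv * V * dT, solved for the temperature rise dT.
# Figures from the text: Q = 8e12 J over 8 levels for WTC 1,
# Q = 3e12 J over 6 levels for WTC 2.

CV = 2.8e6           # volumetric heat capacity of "ironcrete", J/(C * m^3)
V_PER_LEVEL = 870.0  # m^3 of ironcrete frame per building level

def frame_temperature_rise(q_joules, levels):
    """Average frame temperature rise if all heat Q is stored in it."""
    return q_joules / (CV * V_PER_LEVEL * levels)

print(f"WTC 1: dT = {frame_temperature_rise(8e12, 8):.0f} C")  # about 410 C
print(f"WTC 2: dT = {frame_temperature_rise(3e12, 6):.0f} C")  # about 205 C
```

Both results match the dT(1) and dT(2) values quoted in the text.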

Well, Can The Heat Weaken The Steel Enough?

On this, one either believes or one doesn’t believe. Our simple example shows that the fires could heat the frames into the temperature range NIST calculates. It seems entirely reasonable that steel in areas of active and frequent burning would experience greater heating than the averages estimated here, so hotspots of 600 C to 700 C seem completely believable. Also, the data on the strength of the WTC Towers steel at elevated temperatures are not in dispute. I believe NIST; answer: yes.

Let us follow time through a sequence of thermal events.


The airplanes hurtling into the buildings with speeds of at least 200 m/s (450 mph) fragmented into exploding torrents of burning fuel, aluminum and plastic. Sparks generated from the airframe by metal fracture and impact friction ignited the mixture of fuel vapor and air. This explosion blew out windows and billowed burning fuel vapor and spray throughout the floors of the impact zone, and along the stairwells and elevator shafts at the center of the building; burning liquid fuel poured down the central shafts. Burning vapor, bulk liquid and droplets ignited most of what they splattered upon. The intense infrared radiation given off by the 1100 C (2000 F) flames quickly ignited nearby combustibles, such as paper and vinyl folders. Within a fraction of a second, the high pressure of the detonation wave had passed, and a rush of fresh air was sucked in through window openings and the impact gash, sliding along the tops of the floors toward the centers of intense burning.

Hot exhaust gases: carbon monoxide (CO), carbon dioxide (CO2), water vapor (H2O), soot (carbon particles), unburned hydrocarbons (combinations with C and H), oxides of nitrogen (NOx), and particles of pulverized solids vented up stairwells and elevator shafts, and formed thick hot layers underneath floors, heating them while slowly edging toward the openings along the building faces. Within minutes, the aviation fuel was largely burned off, and the oxygen in the impact zone depleted.

Thermal Storage

Fires raged throughout the impact zone in an irregular pattern dictated by the interplay of the blast wave with the distribution of matter. Some areas had intense heating (1100 C), while others might still be habitable (20 C). The pace of burning was regulated by the area available for venting the hot exhaust gases, and the area available for the entry of fresh air. Smoke was cleared from the impact gash by air entering as the cycle of flow was established. The fires were now fueled by the contents of the buildings.

Geometrically, the cement floors had large areas and were closely spaced. They intercepted most of the infrared radiation emitted in the voids between them, and they absorbed heat (by conduction) from the slowly moving (“ventilation limited”) layer of hot gases underneath each of them. Concrete conducts heat poorly, but can hold a great deal of it. The metal reinforcing bars within concrete, as well as the metal plate underneath the concrete pad of each WTC Towers floor structure, would tend to even out the temperature distribution gradually.

This process of “preheating the oven” would slowly raise the average temperature in the impact zone while narrowing the range of extremes in temperature. Within half an hour, heat had penetrated to the interior of the concrete, and the temperature everywhere in the impact zone was between 200 C and 700 C, away from sites of active burning.

Thermal Decomposition — “Cracking”

Fire moved through the impact zone by finding new sources of fuel, and burning at a rate limited by the ventilation, which changed over time.

Heat within the impact zone “cracks” plastic into a sequence of decreasingly volatile hydrocarbons, similar to the way heat separates out an array of hydrocarbon fuels in the refining of crude oil. As plastic absorbs heat and begins to decompose, it emits hydrocarbon vapors. These may flare if oxygen is available and their ignition temperatures are reached. Also, plumes of mixed hydrocarbon vapor and oxygen may detonate. So, a random series of small explosions might occur during the course of a large fire.

Plastics not designed for use at high temperatures may resemble soft oily tar when heated to 400 C. The oil in turn might release vapors of ethane, ethylene, benzene and methane (there are many hydrocarbons) as the temperature climbs further. All these products might begin to burn as the cracking progresses, because oxygen is present and sources of ignition (hotspots, burning embers, infrared radiation) are nearby. Soot is the solid end result of the sequential volatilization and burning of hydrocarbons from plastic. Well over 90% of the thermal energy released in the WTC Towers came from burning the normal contents of the impact zones.

Hot Aluminum

Aluminum alloys melt at temperatures between 475 C and 640 C, and molten aluminum was observed pouring out of WTC 2 (5). Most of the aluminum in the impact zone was from the fragmented airframe; but many office machines and furniture items can have aluminum parts, as can moldings, fixtures, tubing and window frames. The temperatures in the WTC Towers fires were too low to vaporize aluminum; however, the forces of impact and explosion could have broken some of the aluminum into small granules and powder. Chemical reactions with hydrocarbon or water vapors might have occurred on the surfaces of freshly granulated hot aluminum.

The most likely product of aluminum burning is aluminum oxide (Al2O3, “alumina”). Because of the tight chemical bonding between the two aluminum atoms and three oxygen atoms in alumina, the compound is very stable and quite heat resistant, melting at 2054 C and boiling at about 3000 C. The affinity of aluminum for oxygen is such that with enough heat it can “burn” to alumina when combined with water, releasing hydrogen gas from the water,

2*Al + 3*H2O + heat -> Al2O3 + 3*H2.

Water is introduced into the impact zone through the severed plumbing at the building core and as moisture from the outside air; it is also “cracked” out of the gypsum wall panels and, to a lesser extent, out of concrete (both are hydrated solids). Water poured on an aluminum fire can be “fuel to the flame.”

When a mixture of aluminum powder and iron oxide powder is ignited, it burns to iron and aluminum oxide,

2*Al + Fe2O3 + ignition -> Al2O3 + 2*Fe.

This is thermite. The reaction produces a temperature that can melt steel (above 1500 C, 2800 F). The rate of burning is governed by the pace of heat diffusion from the hot reaction zone into the unheated powder mixture. Granules must absorb sufficient heat to arrive at the ignition temperature of the process. The ignition temperature of a quiescent powder of aluminum is 585 C. The ignition temperatures of a variety of dusts were found to be between 315 C and 900 C, by scientists developing solid rocket motors. Burning thermite is not an accelerating chain reaction (“explosion”), it is a “sparkler.” My favorite reference to thermite is in the early 1950s motion picture, “The Thing.”

Did patches of thermite form naturally, by chance, in the WTC Towers fires? Could there really have been small bits of melted steel in the debris as a result? Could there have been “thermite residues” on pieces of steel dug out of the debris months later? Maybe, but none of this leads to a conspiracy. If the post-mortem “thermite signature” suggested that a mass of thermite comparable to the quantities shown in Table 3 was involved, then further investigation would be reasonable. The first task of such an investigation would be to produce a “chemical kinetics” model of the oxidation of the fragmented aluminum airframe, in some degree of contact with the steel framing, in the hot atmosphere of hydrocarbon fires in the impact zone. Once Nature had been eliminated as a suspect, one could proceed to consider Human Malevolence.

Smoldering Rubble

Nature is endlessly creative. The deeper we explore, the more questions we find.

Steel columns along a building face, heated to between 200 C and 700 C, were increasingly compressed and twisted into a sharpening bend. With increasing load and decreasing strength over the course of an hour or more, the material lost the ability to rebound elastically even had the load been released. The steel entered the range of plastic deformation: it could still be stretched through a bend, but like taffy it would take on a permanent set. Eventually, it snapped.

Months later, when this section of steel was dug out of the rubble pile, would the breaks have the fluid look of drawn-out taffy, perhaps “melted” steel now frozen in time? Or would they be clean breaks, like the edges of glass fragments; or rough, granular breaks, as through concrete?

The basements of the WTC Towers included car parks. After the buildings collapsed, it is possible that gasoline fires broke out, adding to the heat of the rubble. We can imagine many of the effects already described occurring in hot pockets within the rubble pile. Water percolating down from the Fire Department’s hoses might carry air down with it, and so act as an oxidizing agent.

The tight packing of the debris from the building, and the randomization of its materials would produce a haphazard and porous form of ironcrete aggregate: chunks of steel mixed with broken and pulverized concrete, with dust-, moisture-, and fume-filled gaps. Like a pyramid of barbecue briquettes, the high heat capacity and low thermal conductivity of the rubble pile would efficiently retain its heat.

Did small hunks of steel melt in rubble hot spots that had just the right mix of chemicals and heat? Unlikely perhaps, but certainly possible.

Pulverized concrete would include that from the impact zone, which may have had part of its water driven off by the heat. If so, such dust would be a desiccating substance (as is Portland cement prior to use; concrete is mixed sand, cement and water). Part of the chronic breathing disorders experienced by many people exposed to the atmosphere at the World Trade Center during and after 9/11/01 may be due to the inhalation of desiccating dust, now lodged in lung tissue.

Did the lingering hydrocarbon vapors and fumes from burning dissolve in water and create acid pools? Did the calcium-, silicon-, aluminum-, and magnesium-oxides of pulverized concrete form salts in pools of water? Did the sulfate from the gypsum wall panels also acidify standing water? Did acids work on metal surfaces over months, to alter their appearance?

In the immensity of each rubble pile, with its massive quantity of stored heat, many effects were possible in small quantities, given time to incubate. It is even possible that in some little puddle buried deep in the rubble, warmed for months in an oven-like enclosure of concrete rocks, bathed in an atmosphere of methane, carbon monoxide, carbon dioxide, and perhaps a touch of oxygen, that DNA was formed.


[1] MANUEL GARCIA, Jr., “The Physics of 9/11,” Nov. 28, 2006, [search in the Counterpunch archives of November, 2006 for this report and its two companions; one on the mechanics of building collapse, and the other an early and not-too-inaccurate speculative analysis of the fire-induced collapse of WTC 7.]

[2] “Executive Summary, Reconstruction of the Fires in the World Trade Center Towers,” NIST NCSTAR 1-5 (28 September 2006). NIST = National Institute of Standards and Technology; NCSTAR = National Construction Safety Team Act Report.

[3] “Fire Structure Interface and Thermal Response of the World Trade Center Towers,” NIST NCSTAR 1-5G (draft supporting technical report G), (28 September 2006), Chapter 3, page 32 (page 74 of 334 of the electronic PDF file).

[4] 1 m = 3.28 ft;    1 m^2 = 10.8 ft^2;    1 m^3 = 35.3 ft^3;    1 ft = 0.305 m;    1 ft^2 = 0.093 m^2;    1 ft^3 = 0.028 m^3.

[5] “National Institute of Standards and Technology (NIST) Federal Building and Fire Safety Investigation of the World Trade Center Disaster, Answers to Frequently Asked Questions,” (11 September 2006).


This article originally appeared as:

The Thermodynamics of 9/11
28 November 2006


Beam Me Up! (With Fossil Fuels?)


This article originally appeared as:

The Fossil Fuel Paradigm
25 October 2013


“Beam me up, Scotty.” That phrase is as well known to science fiction aficionados as “Gort, Klaatu barada nikto.”

James Tiberius Kirk, the lead character and commanding officer in the futuristic space fantasy television series Star Trek (1966-1969) would call through his wireless communicator for his chief engineer Montgomery Scott to initiate the process of “energizing” him, to be instantly converted into pure energy, and propagated — “transported” — from a planetary surface or another spaceship back to Kirk’s own spaceship the Enterprise where he would be returned to his bodily form.

The popularity of the Star Trek series and its many sequels, spin-offs, imitations and entertaining derivatives all show how entrancing people find the idea of being able to pursue their private dramas with unlimited energy and unflagging power at their disposal, literally at the push of a button. And, one of the most attractive fantasies about having such power would be the ability to hop in a flash across great distances at a moment’s notice: the transporter.

Today our fossil fuel diggers frack their way under the skin of Planet Earth with their noses pressed tight against the grindstone of profitability, and we burn up oil squeezed out of tar sands and coal hollowed out of mountains to keep up the high-powered freneticism of modern times, dismissing concerns about increasingly turbid, choking, cancerous air (as in Harbin, China) and about global warming, with its negative effects on the polar regions, on oceans and marine life, and on weather and climate. Against that backdrop, the longed-for science fiction fantasy of unlimited kilowatts and unlimited horsepower without undue environmental consequences can seem so cruelly distant. Why can’t we have that now? When will we get it?

In our (humanity’s) attachment to the fossil fuel paradigm, too many of us find it so much easier to imagine how we would employ unlimited push-button power for our expanding and instantaneous personal wants, instead of imagining how to fashion lives of timeless fulfillment liberated from fabricated desires, and expressed with elegant and graceful efficiency.

Given all that, I thought it would be interesting to consider the physics problem of building a “beam me up” transporter. To start this speculative analysis, let us consider the energy and power needed to convert a 70 kilogram (154 pound) person into pure energy for electromagnetic transport.

First, a few words about notation:

The symbol x means multiply.

The symbol ^ means exponentiation (raising to a power).

The unit of mass is a kilogram, with symbol kg. 1 kg = 2.20462 pounds.

The unit of energy is a joule, with symbol J.

1 Exajoule = 10^18 joules = 1 EJ.

The unit of power is a watt, with symbol W.

1 joule/second = 1 J/s = 1 watt = 1 W.

1 Kilowatt = 1 kW = 10^3 W.

1 Terawatt = 1 TW = 10^12 W.

1 Exawatt = 1 EW = 10^18 W.

3,600,000 J = 1 kilowatt x 1 hour = 1 kWh.

Albert Einstein famously showed that mass (m) and energy (E) are two aspects of a single entity, mass-energy, and that the pure energy equivalent of a given mass is E = m x c^2, where c is the speed of light (c = 3 x 10^8 meters/second, in vacuum).

The physical universe is 13.8 billion years old (since the Big Bang) and presently has an extent (distance to the event horizon) of 1.3×10^23 kilometers. The total mass-energy in the universe can be stated as a mass equivalent of 4.4×10^52 kg, or an energy equivalent of 4×10^69 joules.

A 70 kg mass, whether a living person or just inert stuff, has a pure energy equivalent, by Einstein’s formula, of 6.3×10^18 joules (6.3 EJ). So, our desired transporter must supply at least 6.3 EJ to beam a 70 kg mass.
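The mass-energy conversion is a one-line calculation; here it is as a Python sketch using the rounded speed of light from the notation section.

```python
C = 3.0e8  # speed of light in vacuum, m/s

def mass_to_energy(mass_kg):
    """Einstein's E = m * c^2: pure energy equivalent of a mass, in joules."""
    return mass_kg * C**2

energy_EJ = mass_to_energy(70) / 1e18
print(f"70 kg is equivalent to {energy_EJ:.1f} EJ")  # 6.3 EJ
```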

For comparison, the total US energy use in 2008 was 95.7 EJ, and the total world energy use in 2008 was 474 EJ. The combined pure energy equivalents of 15.2 people of 70 kg equal the total US energy use in 2008. Similarly, the combined mass-energy of 75.4 such people is equivalent to the world energy consumption that year.

Given that there are 3.15569×10^7 seconds in one year, we can calculate the average rate of energy use during 2008 (the power generated) in the U.S.A. as 3 TW, and in the world as 15 TW.

At the US power rate, it would take 24 days to convert one 70 kg individual or object into pure energy for transport if the entire national power output were devoted to this task. If the entire world were yoked to this purpose, it would take 4.9 days.
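Those time estimates follow directly from dividing the required energy by the available power. A short check, using the rounded figures quoted above:

```python
E_TRANSPORT = 6.3e18   # J, pure-energy equivalent of a 70 kg mass
US_POWER = 3e12        # W, average US generation rate in 2008
WORLD_POWER = 15e12    # W, average world generation rate in 2008
SECONDS_PER_DAY = 86400

days_us = E_TRANSPORT / US_POWER / SECONDS_PER_DAY
days_world = E_TRANSPORT / WORLD_POWER / SECONDS_PER_DAY
print(f"At the US rate: {days_us:.1f} days")       # about 24 days
print(f"At the world rate: {days_world:.1f} days") # about 4.9 days
```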

Aside from considerations of monopolizing national and world power consumption, the idea of “disassembling” a living person and converting them to pure energy over the course of days to weeks seems unappealingly long. How do we ensure we don’t lose the life whose bodily form is being disassembled and dematerialized so slowly? The whole point of a transporter is to achieve near instantaneous relocation.

For the sake of simplicity we will continue a little bit further with the convenient assumption that a 70 kg transport, whether of a human being or a lump of lead, only requires 6.3 EJ. This implies 100% efficiency of mass conversion to energy, and that no extra energy is required to collect the information needed to materially reconstruct the individual or object on arrival, rather than just deliver a 70 kg puddle of gunk.

If this transporter were to accomplish the 70 kg conversion process in 24 hours exactly (86400 seconds), it would have a power rating of 6.3 EJ/day or 72.8 TW. This is a much higher power consumption than the US national average (3 TW). To operate such a transporter would require an energy storage system with a capacity of at least 6.3 EJ to feed the transporter (discharging over a 24 hour period), and which storage system would be charged up over a longer period prior to transport.

Obviously, if we could build transporters of increased power, the conversion would occur in less time. Thus, a transporter that could convert the 70 kg traveler to pure energy within one hour would operate at 1,747 TW (and draw power from the storage bank at that rate). A 1 minute transport conversion would require 104,846 TW. A 5 second transport converter would require 1,258,157 TW (1.26 EW). For any of these machines, it would take 24 days of total US power generation to store up the energy required for one transport, or almost 5 days of total world power generation.
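The power ratings above are just the 6.3 EJ transport energy divided by the conversion time. A sketch of that scaling follows; small differences from the article’s quoted figures (1,747 TW, 104,846 TW, etc.) come from rounding the transport energy to 6.3 EJ here.

```python
E_TRANSPORT = 6.3e18  # J, pure-energy equivalent of a 70 kg mass

def transporter_power_TW(conversion_seconds):
    """Power, in terawatts, needed to release 6.3 EJ in the given time."""
    return E_TRANSPORT / conversion_seconds / 1e12

for label, t in [("24 hours", 86400), ("1 hour", 3600),
                 ("1 minute", 60), ("5 seconds", 5)]:
    print(f"{label}: {transporter_power_TW(t):,.0f} TW")
```

The 24-hour case works out to roughly 73 TW, already more than twenty times the average US power rate of 3 TW.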

The power generated on Planet Earth, in reality not science fiction, is just not enough for a transporter. Why not use the power of the Sun?

The Sun’s luminosity is 384.6×10^6 EW. If totally harnessed, it would take the Sun 16.4 nanoseconds to supply the 6.3 EJ needed for our 70 kg transport converter. A 5 second (1.26 EW) transport converter could be powered from only 3.3 billionths of the Sun’s luminosity.

The solar mean distance to Earth is 1.496×10^8 km, which is used as a convenient unit of distance in descriptions of the Solar System, and known as 1 AU (one astronomical unit).

A disc 34,224 km in diameter at 1 AU would capture the 3.3 billionths of the Sun’s luminosity needed for our 5 second transport converter. That solar collection disc (assumed 100% efficient) would be 2.7 times larger in diameter than the Earth. Since we wouldn’t want to give up our sunshine by using Planet Earth as a solar collector (for the transporter), nor risk shadowing Planet Earth with an oversized collection disc in nearby outer space, it would seem best to have the entire collector and transporter system away at a distance comparable to the Moon. Travelers and cargo from Planet Earth scheduled for deep space transport would first have to shuttle to their embarkation point on the Moon by relatively sedate rocket technology.
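The disc size follows from simple geometry: a disc at 1 AU intercepts the Sun’s output in proportion to its area divided by the area of the full sphere of radius 1 AU. A Python check of the figures quoted above:

```python
import math

E_TRANSPORT = 6.3e18       # J, pure-energy equivalent of a 70 kg mass
SUN_LUMINOSITY = 3.846e26  # W
AU = 1.496e11              # m, mean Sun-Earth distance

# Fraction of the Sun's luminosity needed by a 5-second (1.26 EW) converter.
fraction = (E_TRANSPORT / 5) / SUN_LUMINOSITY  # about 3.3e-9

# Collector disc at 1 AU intercepting that fraction of the solar output.
sphere_area = 4 * math.pi * AU**2
disc_area = fraction * sphere_area
diameter_km = 2 * math.sqrt(disc_area / math.pi) / 1000

print(f"Fraction of solar output: {fraction:.2e}")
print(f"Disc diameter: {diameter_km:,.0f} km")  # about 34,000 km
print(f"Relative to Earth: {diameter_km / 12742:.1f} Earth diameters")
```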

Let us return to the question of the extra energy required to collect the information needed to materially reconstruct an individual or object on arrival after beaming. The immense amount of information about the molecular, atomic and sub-atomic bonds and their many dynamic structural arrangements that in total make up the biophysical self of a particular individual will necessarily require a huge investment of energy to ascertain and code electronically.

One can see that such vital information about the actual relationships between the particle and cellular forms of matter that make up a specific living organism has an equivalent mass-energy: the sum of the energy required to program the information and then convert that program into transmissible electromagnetic waves. Because a human being is much more complex than the sum of his or her elemental and chemical composition, it is possible that the information mass-energy of a human being will outweigh their bulk mass-energy. Hence, the transport of a 70 kg person that only accounts for the 70 kg of bulk mass will undoubtedly deliver a dead blob of stuff unlikely to even duplicate the original chemical composition. To deliver the same living person, who happens to possess a particular physicality of 70 kg bulk mass, will require much more energy, a vast overhead to account for the great subtlety of living biochemical reality and consciousness. So, perhaps our 70 kg transporter will be able to deliver 70 kg of water, or a 70 kg salt crystal or slab of iron, but only safely transport a much simpler living organism, like a small plant or an insect.

Actually, it is only the fully detailed structural code of the individual that would be essential for dematerialized transport. We imagine that such a code would have to be determined by disassembling the materiality of the individual (or object), by “energizing” them. It is then only necessary to transmit the code, not the now destroyed physical materiality converted into pure energy. Otherwise, if such unique structural codes could be determined nondestructively, then the transporter system would advance into being a duplicating system, a 3D cloning printer.

On arrival, the electromagnetic message that is the coded person or object being transported can be rematerialized from energy stored at the destination. Otherwise, the electromagnetic forms of both the structural code and the bulk materiality of the person or object would have to be transmitted, and the materialization at the destination would involve reading the code to use it as a guide in reconverting the beamed-in energy back into the original structured bulk mass.

Other problems for transporter system designers, which we will not explore here, include conversion efficiencies, distortion and loss of signal during propagation, and transport through solid material.

It seems that we will be earthbound without transporters for quite some time.

Oh, that this too, too sullied flesh would melt,

Thaw, and resolve itself into a dew,

Or that the Everlasting had not fixed

His canon ‘gainst self-slaughter! O God, God!

How weary, stale, flat, and unprofitable

Seem to me all the uses of this world!

Fie on ’t, ah fie! ‘Tis an unweeded garden

That grows to seed. Things rank and gross in nature

Possess it merely. That it should come to this.

Today’s reality may seem so primitive, constricted and decayed in comparison to the fantasy worlds of Star Trek, unbounded by physical science, but perhaps the liberation of the spirit so many imagine through science fiction can be experienced here by having the right attitude rather than just wanting unlimited power.