Rebuttal: "On the physics of high-rise building collapses".

Introduction

A while ago, I was alerted to an article by "four physicists" in a "scientific journal". Apparently, these authors had "shown" that the conspiracy is real. Behold:
A European scientific study has concluded that on September 11, 2001, the Twin Towers were brought down by a controlled demolition. The study, conducted by four physicists and published in Europhysics Magazine, says that “the evidence points overwhelmingly to the conclusion that all three buildings were destroyed by controlled demolition.” (YourNewsWire)
 We discussed the article briefly, mostly dismissing it because it is not a scientific study, nor is it by four physicists, and its conclusions do not follow from the evidence.

However, the conspiracy crowd seems really happy with it. The Credible Hulk, for instance, had several 'truthers' all over his page (here).

I decided that, as a physicist, it might be interesting to write a rebuttal of this article. I'd like to make a few points before I start. First, the piece was published in the member magazine of a non-profit organisation for the promotion of physics and its practitioners. Second, it was not written by four physicists. Steven Jones was a physicist, and is apparently an outspoken Mormon and a truther. Robert Korol is a civil engineer. Anthony Szamboti is a mechanical engineer. Ted Walter is apparently only affiliated with Architects & Engineers for 9/11 Truth. So, to conclude, the study was written by zero physicists. (By my long-standing convention of not counting non-practitioners.)

It is also speculative and different from [the magazine]'s purely scientific articles.

On the physics of high-rise building collapses

The authors start by noting that the (US) National Institute of Standards and Technology, NIST, launched its investigation in 2002. They claim that NIST stated, as its premise, that the WTC towers were the only known cases of total structural collapse in which fires played a significant role.

The authors go on to state that, "Indeed, neither before nor since 9/11 have fires caused the total collapse ...". That is interesting, but for a very different reason than you would suppose. Their source [1], a NIST report surveying just such historical information, notes:
The results of the world-wide survey indicated that a total of 22 fire-induced collapses were identified spanning from 1970 to the present. The 2001 World Trade Center (WTC) collapses accounted for four of these events. Seven major multi-story fire events were also identified as having significant structural damage due to a fire, but did not exhibit collapse. (NIST)
The authors continue to explain why steel-framed high-rises have endured large fires. While interesting, they then re-state that countless other steel-framed high-rises have not suffered total collapse due to fires, citing the above source. As you can see, the source does not seem to back this up. The trick seems to lie in the wording: the above quote is about multi-story buildings that collapsed, whereas the authors speak only of "steel-framed high-rise" buildings. "Steel-framed" indicates a building technique (used for e.g. skyscrapers), and a "high-rise" is a tall building used for residential or office purposes.

However, this claim is not very strong. The reason is that the number of buildings fitting their criteria is about 30 in 1970, 60 in 1980 and about 280 in 2001. As they have just explained, fires typically are not hot enough or do not last long enough to heat structural members, fire-suppression systems are present, fireproofing materials are applied, and the buildings are designed with redundancy. So, about 260 such buildings existed for some time before 2001; the problem is selection bias. You are selecting buildings that are both well protected and rare. Of course you do not find collapses in such a small sample.
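
To put a number on that, here is a minimal back-of-the-envelope sketch in Python. The collapse rate is a number I made up purely for illustration; the building counts are the ones above. The point is only that a rare event class, observed over a couple of thousand well-protected building-years, yields an expected count near zero either way:

rate = 1e-4  # assumed collapses per building per year (made up for illustration)

# crude building-year tally: ~30 buildings through the 1970s,
# ~60 through the 1980s and 1990s, growing to ~280 by 2001
building_years = 30 * 10 + 60 * 21
expected = rate * building_years
print(f"~{building_years} building-years -> ~{expected:.2f} expected collapses")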

They banter on a bit about how strong the materials are, but then point out that the head structural engineer, John Skilling, explained that the towers had been designed specifically to withstand the impact of a jetliner. While he did state that his people performed a calculation showing the towers could withstand such an impact (Seattle Times), it is not clear that this was done in the design phase. More importantly, that calculation concerns the structural damage due to the impact. The issue, as various investigations concluded, is that the long-lasting fire, fed by jet fuel, caused the material properties of the structure to deteriorate.

And now they attempt to make Skilling say things he did not. The authors state that "In other words, Skilling believed the only mechanism that could bring down the Twin Towers was controlled demolition." That is false. Skilling stated that he did not think a 200-pound car bomb would cause major damage, because of the design redundancy. However, he thought that properly applied explosives of that magnitude could do it, for instance in the hands of a top demolition expert. That does not imply it is the only mechanism, merely that with a fixed amount (200 pounds) of explosives, an expert could do it.

The authors continue to explain thermite demolition, where thermite charges are used to cut the support columns. You cut on only one floor, call it floor C. The floors above C collapse down onto C, crushing the columns that support floor C. The entire building thus collapses downward, with each new floor taken up into the 'hammer' made of previously collapsed floors. A slightly more sophisticated method works from the inside out, so that the collapse moves inward and downward, confining the debris.

While reading that explanation of thermite demolition, the first thing I thought was that a building with many support columns and a sufficiently hot fire would automatically collapse in the sophisticated manner. The reason is that a widespread fire is hotter near the centre, so those columns are the first to fail. This puts more stress on the slightly more outward columns, which thus buckle at a lower temperature, and so on.
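
To see why such a 'hammer' is hard to stop once started, here is a crude one-dimensional toy model, a minimal sketch with made-up storey masses and column-crushing energies. It is emphatically not the NIST analysis; it only illustrates that a falling, accreting mass tends to gain more energy per storey than the columns below can absorb:

# Crude 1-D "pancake" toy model of progressive collapse.
# All parameters are invented for the sketch.
g = 9.81        # gravitational acceleration, m/s^2
h = 3.7         # storey height, m (assumed)
m_floor = 3e6   # mass per storey, kg (assumed)
E_col = 5e8     # energy absorbed crushing one storey's columns, J (assumed)

m = 15 * m_floor   # falling block: 15 storeys above the failure zone
v = 0.0            # downward velocity, m/s
for storey in range(90):
    # kinetic energy after falling one storey, minus column-crushing work
    ke = 0.5 * m * v**2 + m * g * h - E_col
    if ke <= 0:
        print(f"arrested after {storey} storeys")
        break
    v = (2 * ke / m) ** 0.5
    # perfectly inelastic impact with the next, stationary floor
    m += m_floor
    v = (m - m_floor) * v / m
else:
    print(f"reached the ground at about {v:.0f} m/s")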

[Figure, from the NIST final report: downward velocity of the north-face roofline as WTC 7 began to collapse.]
Now, in addition to showing the badly made Figure 2, the authors refer to the NIST report (NIST final). They claim the building dropped in "absolute free fall for the first 2.25 seconds of its descent" and that the transition from stasis to free fall was "sudden, occurring in approximately one-half second". The cited NIST report, whose figure is reproduced on the right, however, says that the stage 1 descent was slow and lasted about 1.75 seconds, followed by the free-fall stage 2 descent lasting about 2.25 seconds, and a final stage 3 lasting another 1.4 seconds. Clearly, they disagree with their own cited source. Also important is that this is a video measurement of the downward displacement of a single point near the centre of the roofline. It does not describe the entire collapse; videos taken from different angles show that the collapse started earlier, with the east penthouse falling, putting the total collapse at 16 seconds or so (TCH video). This video correlates with Table 3-1 in the final report of NIST.
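
For scale, a quick kinematics check using nothing but constant-g free fall and the stage durations quoted above:

# Free-fall distance over the stage 2 interval: d = g*t^2/2.
g = 9.81    # m/s^2
t2 = 2.25   # s, duration of the free-fall stage 2
d2 = 0.5 * g * t2**2
print(f"stage 2 free-fall drop: about {d2:.0f} m")   # ~25 m, six or seven storeys

# The three measured stages span ~5.4 s of roofline motion in total,
# well short of the ~16 s visible in the full-collapse videos.
print(f"measured roofline descent: {1.75 + 2.25 + 1.4:.1f} s")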

The NIST report is really rather clear: the leading hypothesis attributes the collapse of WTC 7 to fires. The fires resulted in a loss of lateral support, and the critical columns started buckling. The initial local failure progressed upward to the east penthouse, as shown in the video above. The combined loss of lateral support and the damage due to falling debris led to progression of the failure, ultimately resulting in the collapse of the entire structure.

What is actually quite interesting is that NIST expended considerable effort to compile evidence and determine whether intentionally set explosives might have caused the collapse. Numerous scenarios were investigated, but the absence of the window breakage and sound levels that such explosions would have produced means this hypothesis lacks confirmation of any kind. They therefore concluded that there was no demolition-type blast intense enough to have led to the collapse.

Back to the authors, where we meet the following. It is quite painful, so I'll quote the sentence verbatim:
"Given the nature of the collapse, any investigation adhering to the scientific method should have seriously considered the controlled demolition hypothesis, if not started with it. Instead, NIST (as well as the FEMA, which conducted a preliminary study prior to the NIST investigation), began with the predetermined conclusion that the collapse was caused by fires.
I've described the developed leading hypothesis before, and we've just discussed the hypothetical blast scenarios. Quite clearly, the investigation adhered to the scientific method and seriously considered the controlled demolition hypothesis. The authors have neglected to read the sources they cite.

The authors then carry on about the 'predetermined conclusion'. The preliminary study by FEMA was, apparently, not clear on the specifics. I wonder if the authors know what preliminary means. The authors continue to claim some things that someone else allegedly said, without citing any sources. They then claim that NIST at first never acknowledged free fall, until it acknowledged stage 2. I am going to guess that NIST answered that the entire collapse took longer than free fall. They move on to claim that the NIST computer models do not show the period of free fall. However, the NIST computations focussed primarily on reproducing the empirical features of the collapse. Table 3-1 gives an impression of this: the analysis times recorded there indicate that the observed events fall well within the analysis. (If you are wondering why these are not exact matches: there are numerous input variables that determine the exact timing of the events. What is important is that the events occur in the correct order and that the observed times lie within the range spanned by the zero-debris and extreme-debris parameter runs.)
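
In sketch form, the kind of consistency check being described would look something like this; the event names echo the report, but every number below is an invented placeholder, not a value from Table 3-1:

# Are the observed event times in the right order, and inside the
# envelope spanned by the zero-debris and extreme-debris runs?
events = [
    # (name, t_min, t_max, t_observed), seconds; all invented
    ("east penthouse collapse", 1.0, 4.0, 2.5),
    ("screenwall collapse", 3.0, 7.0, 5.0),
    ("global collapse onset", 6.0, 12.0, 9.0),
]
in_order = all(a[3] < b[3] for a, b in zip(events, events[1:]))
in_envelope = all(lo <= obs <= hi for _, lo, hi, obs in events)
print(f"correct order: {in_order}, within envelope: {in_envelope}")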

The authors claim that the NIST final report provides "an elaborate scenario involving an unprecedented failure mechanism", meaning that the fires weakened the lateral support, allowing for buckling. The authors claim that NIST was able to arrive at this scenario only by omission or misrepresentation, citing a pseudo-journal as support for this allegation. The authors then claim that the NIST computer model fails to replicate the collapse, showing a metal-frame model at a very odd angle. If you look at the video material for WTC 7 (e.g. here), you see various fault lines appear in the exterior of the structure as it collapses. The "large deformations to the exterior" are observable, especially if you play the video in slow motion. The thing is, the clearest deformations, the buckling near the bottom of the structure, are not visible in the video because other buildings block the view.

The authors move on to consider the Twin Towers. They claim that the definitive report by NIST contains no analysis of why the lower sections failed to slow the descent of the upper sections. They cite an engineering mechanics paper and the Questions and Answers (here) regarding WTC 1 & 2. This strikes me as exceedingly odd, as the NIST NCSTAR 1-6 report concerns exactly this topic. From its abstract: "Insights gained from .. were used, in turn, to formulate and execute nonlinear, temperature-dependent finite element analyses of global structural systems to predict the collapse sequence of each tower. The structural analyses were guided, and where possible validated, by observations..". It seems that the authors are cherry-picking a source that fits their story, rather than using the report that concerns the topic under consideration. They also complain that NIST could not provide the full explanation because the computer models did not converge. What this means is that several models, each taking into account or detailing different aspects of a non-linear system, did not converge on a singular solution. What it does not mean is that they are all wrong; the combined systems provide you with an envelope inside which the observations fall. Non-linear systems are notoriously hard to work with and often do not converge, and a very limiting factor is the available computational power, which was far less when these investigations took place.
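
For readers who have never fought a non-linear solver, here is a minimal one-variable illustration of how convergence can fail; actual finite-element models are incomparably harder. The equation is a textbook example, chosen only to show the behaviour:

# Newton's method on a single nonlinear equation. Whether it converges
# depends strongly on the starting point.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i, True          # converged
        x -= fx / df(x)
    return x, max_iter, False          # did not converge

f = lambda x: x**3 - 2*x + 2           # textbook example
df = lambda x: 3*x**2 - 2

print(newton(f, df, x0=-2.0))   # converges to the root near x = -1.77
print(newton(f, df, x0=0.0))    # cycles between 0 and 1: no convergence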

The authors then misrepresent partial sentences to make NIST seem incapable. They make it seem as if NIST merely cited a paper rather than investigating the issue itself. This condemnable misrepresentation is very revealing, in that the authors truly want to present a story despite the evidence. The quotes are found in section 9.4.4, "Comparison with other collapse hypotheses". It is in this setting that NIST cites the 2002 paper by Bazant and Zhou, and finds that it agrees with their leading hypothesis. The authors then cite the pseudo-journal again, claiming that "researchers" have found the Bazant paper to be wrong.

The authors then overestimate the significance of "puffs of smoke", which look remarkably like dust expelled through small structural fractures during the collapse. They also wonder why the segment that fell into the interior of the building (e.g. the penthouse) is not visible in the rest of the videos. This is easily explained by their own account of how interior-first failure of support columns causes a building to fall inward. The authors continue their rant, for instance claiming that molten aluminium has a silvery appearance. While it is true that most metals in their liquid phase look metallic, we also know that many metals glow red-hot at some point. The same is true for aluminium (e.g. video). Temperatures reached about a thousand degrees Celsius (NIST), while aluminium has a melting point of 660 degrees Celsius. At a thousand degrees it will glow red-hot, just like a light bulb filament. The mechanism is black-body radiation, whose spectrum Planck explained; the black-body problem was one of the anomalies in 19th-century physics that led to quantum theory and the modern electronics of today.
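
A quick sanity check on the glow claim, using Wien's displacement law:

# Wien's displacement law: the black-body spectral peak sits at
# lambda_peak = b / T.
b = 2.898e-3                 # Wien's displacement constant, m*K
for T_C in (660, 1000):      # melting point of aluminium; approx. fire temperature
    T = T_C + 273.15         # kelvin
    print(f"{T_C} C: spectral peak at {b / T * 1e9:.0f} nm")
# Both peaks are in the infrared, but above the Draper point (~525 C)
# the visible tail of the spectrum is bright enough that any hot object
# glows red, aluminium included.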

The authors wrap it up with the following conclusion:
It bears repeating that fires have never caused the total collapse of a steel-framed high-rise before or since 9/11. Did we witness an unprecedented event three separate times on September 11, 2001? The NIST reports, which attempted to support that unlikely conclusion, fail to persuade a growing number of architects, engineers, and scientists. Instead, the evidence points overwhelmingly to the conclusion that all three buildings were destroyed by controlled demolition. Given the far-reaching implications, it is morally imperative that this hypothesis be the subject of a truly scientific and impartial investigation by responsible authorities.
I would like to provide my own version:

It bears repeating that fires have caused the total collapse of multi-story buildings before and since 9/11. Did we witness an unprecedented event on September 11, 2001? Yes: planes flying into buildings. The NIST reports, which support the likely conclusion that fires burn and melt, persuade a growing number of experts and laymen. The evidence points overwhelmingly to the conclusion that all three buildings were destroyed because, it turns out, fire is hot. Given the far-reaching implications, it is morally imperative, by utility, the categorical imperative and scientific values alike, that hypotheses continue to be tested in truly scientific and impartial investigations, as NIST has done.

More comments

The authors have quite clearly been shown to misrepresent and cherry-pick the NIST report fragments they present, in order to cling to a preordained conclusion even after it has been considered and demonstrated to be untenable. Amusingly, this is exactly what they accuse NIST of.

There are a number of things that conspiracy theorists try to wield as if they were an enchanted sword granted to them by a naked lady living in a lake. For instance, the fact that multiple news channels reported the collapse of WTC 7 too early. This is rather simply explained: the FDNY told reporters that WTC 7 was expected to collapse, and some reporters misinterpreted this and thought it had already happened.

To the question "what other buildings do you know that collapsed from fire alone?", I saw someone respond with this video. That is the Faculty of Architecture building of Delft University of Technology. It burned down about a year before I entered. As you can see, this building also nicely falls towards the interior. Here, the collapse does not continue because the upper part of the building is not sufficiently heavy.


Sorry, conspiracists. You still have no evidence that the plot of Assassin's Creed (the part involving Abstergo) is real.

A tale of selection

Introduction

While reading my delightful new book, which I will name at the end, I realised how few people appreciate all the implications of natural selection.

It was Charles Darwin who, in On the Origin of Species (by Means of Natural Selection), first postulated an explanation of evolution through one mechanism of heritability and another of variability. We now know both mechanisms; they are in our DNA. DNA is the mechanism of heritability, and copying errors and sexual reshuffling are the mechanisms of variation.

Not too long ago, breeding was invented. By this I do not mean merely the raising of livestock, but the deliberate selection of offspring so as to produce some sort of phenotypical change. The result is the many breeds of dogs and cats we are familiar with.

The first chapter of the Origin is 'Variation under Domestication', and is essentially about breeding, i.e. artificial selection. From there, Charles Darwin leaps to 'Variation under Nature' and natural selection as a mechanism. The rest of the book is essentially an extremely satisfying list of examples, drawn from the leading naturalists (experts in natural history) of the day. Anybody who claims there is no evidence for evolution should read the Origin.

Variation under Domestication

[Image: leaf-cutter ants (source).]

As pointed out in the new book I am reading, domestication is not necessarily limited to humanity. Indeed, many species of ant practise agriculture and husbandry in their nests, the prime example being the leaf-cutter ant, which cuts leaves to feed to its fungus farms. The image to the right came with the description: "What a wonderful example of symbiotic relationships." I disagree; this is a wonderful example of domestication, of farming in the wild.

"Nevertheless", that author might say, "these ants have also radically changed their habits and are very dependent on the fungi farms. Truly, both species have adapted to their cooperation and the correct description is a symbiotic relationship." I might even grant that. Indeed, I would respond, we are in a symbiotic relationship with cattle and crops. 

"What nonsense is this", my imaginary friend says, "we have domesticated them, not the other way around". Perhaps, I might say. From their perspective, of course, they have a slave species that provides sustenance (fertiliser, animal feed), protection (pesticides, fences, pastures) and so forth. And what do you think of the evolution of lactose-tolerance? "Even so", she says, " we eat them; we take their young, we determine who reproduces. Clearly, we are dominant and I conclude they are domesticated by us". I agree, I add; that is how humans define domestication. And thus, I conclude leaf-cutter ants have domesticated fungi. "Oh, shut up", she adds, "and make me a sandwich". 

Seen in this light, many more examples of domestication suddenly pop into view. For instance, grass is domesticated by grazing animals. This might seem odd, until you realise that grass does better in the presence of these grazers. From the grass's point of view, cattle provide the service of killing off competitors, such as saplings. From the cattle's point of view, there is a plant that grows leaves fast enough for continuous grazing. And as time goes by, both reinforce this relationship. But, as we just stated, the cows eat the grass and can eat its seeds. The cows are dominant, so they have domesticated grass. Now, isn't that a thought?

Agricultural Revolution

We all should know how it works. From a herd of cows, we breed only the females that give a large amount of milk. From their offspring, we select the bulls and, again, the high-producing females. And so we continue; as the generations go by, our cows become better at giving milk than they were. This is artificial selection acting on the natural variation in milk-producing capacity.
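
As a minimal sketch of that process: quantitative genetics summarises it in the breeder's equation, R = h²S, where S is the selection differential and h² the heritability. The heritability, yields and top-20% rule below are all assumptions for illustration:

# Toy truncation selection using the breeder's equation R = h^2 * S.
import random

h2 = 0.3                  # assumed heritability of milk yield
mean, sd = 7000.0, 800.0  # litres/year, assumed
for gen in range(1, 6):
    herd = [random.gauss(mean, sd) for _ in range(1000)]
    herd_mean = sum(herd) / len(herd)
    parents_mean = sum(sorted(herd)[-200:]) / 200   # top 20% of cows
    S = parents_mean - herd_mean                    # selection differential
    mean = herd_mean + h2 * S                       # next generation: R = h^2 * S
    print(f"generation {gen}: mean yield ~{mean:.0f} L")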

Ever since we got a clear picture of that process, we've been breeding. But even before that, our species or genus was a selective pressure for many other species. Consider a nomadic tribe, in a landscape with several other nomadic tribes. Our nomadic tribe is peaceful, and they agree on their tribal lands with the other tribes. Over the years, the tribe develops habits. In the winter, it survives on the edge of a lake, spending the time fishing and hunting the animals that come for water. In the spring, it moves out to the plains, where herds of grazing animals are now breaking up and forming smaller, huntable packs. Then, as summer arrives, the plain becomes too hot and the tribe moves up the mountains, past the mountain stream. Here, there is relatively easy hunting in comfort. As autumn comes, the tribe moves back to the plains, to catch the migration of the grazing animals, which move from the fertile grounds to wherever they spend the winter.

As the tribe makes its rounds, they gather some plants in the area. A vine with berries is plucked, but the inedible seeds are deposited all around the camp. Certain roots are picked, the thickest grass seeds are eaten, berries are plucked. Around the camp, the inedible parts and leftovers are deposited. The tribe moves on. In the cleared space that was the camp, the inedible seeds and leftover roots start to grow again.

The next year, the next cycle. The tribe comes into the area and finds that their camp of the past year is now full of plants. So, they set up camp close by and set to gathering and hunting. As they walk through their previous camp, they find it full of edible plants. They gather them, and the inedible seeds and leftovers are once again spread around the camp. Each cycle signifies another round of unconscious selection for the lushest berries, the easily found root full of nutrients, and so forth. While the eating itself is not beneficial for the plant, the rewards are many: its offspring grow without much competition, are sure to be transported to new ground, and that ground is usually full of nutrients - after all, a tribe of humans just left the area.

And so, at some point, the previous camp is filled with grapes, potatoes, elderberry, cabbage and wheat. Not wholly the varieties we have today, but the first domesticated versions. Who knows how the first farm started? Perhaps a young family was left behind because their expected birth fell at the moment the tribe had to follow the grazing migration. The tribe leaves them, comes back and finds that the (now larger) family is still there, happily living off the plants of the past camps with a little hunting on the side. Soon, the entire tribe spends part of the cycle farming. And, as they accidentally select the crops, a larger part of the cycle goes to farming. Until one day, one tribe of farmers throws a rock at another tribe of farmers and we call it civilisation.

Selection

I'm not sure, but I think the inception of agriculture preceded that of animal husbandry. Perhaps a sickly horse or a lost foal hung around the easily gathered foodstuffs of the early farms. Maybe a kindly disposed human, adult or not, nursed such a foundling to adulthood or health. Either way, animals were domesticated as well.

The capacity of a farm is limited. As the animals multiply, you must at some point reduce the herd. Naturally, you choose the animals that are least productive: the skinny bulls, the cows that produce the least milk. And lo, you have practised artificial selection. As this continues, presumably simply because of herd size versus farm capacity, each farm prunes the weak and least productive. Sometimes, no doubt, farms traded or gifted part of their livestock. Even so, the domesticated cattle were unconsciously selected towards greater production of human foodstuffs.

As time went on, humans became more and more dependent on agriculture and animal husbandry. The selection pressure on humans was that bad farmers died out, so the human species progressed towards better farming. In that sense, we are domesticated by our livestock and crops.

I don't know how deliberate breeding came to be. I imagine it was something of a reward: this meek stallion that worked so well in my field, he must have time with the mare. For humankind, sex is usually seen as a reward, and by our anthropocentric thinking, the same must be true for our livestock. In this way we started to artificially select and breed our livestock.

The results are out there. At some point, breeding made it into the halls of aristocracy, which can be a bad influence at first. For instance, in 'Daughter of the Empire' by Raymond E. Feist and Janny Wurts, there is an amusing passage between the (new) Lady of the House and her highest clerk. The clerk expresses his confusion that many Houses choose their breeding bulls by looking at certain characteristics that reflect the masculinity of the Lord of the House. The Lady replies that she does not require reflections of her masculinity, and that he should simply breed as he thinks best.

Even so, the reward system still holds. And, as feudal strongholds grew, so did their kennels, and dog breeding started. Eventually, breeding turned into a white-collar hobby. At some point, the question of the transmutation of species arose. And Charles Darwin, a young naturalist, sailed with the HMS Beagle - itself named after an artificially selected breed of dog. Soon, most of the western world knew the principles behind breeding.

Directed Evolution

With that, far stronger breeding methods were developed. Eventually, in the early 20th century, we started to interfere with the mechanism of variation itself. Various things can induce mutation, and the breeding method called mutagenesis was born.

By whatever method, be it the addition of chemical mutagens or ionizing radiation, scientists were able to directly affect the variation in offspring. Take a large number of seeds, treat them, and see what variation this leads to. What an improvement it was: the rate of mutation changed drastically.

Mind you, it's a real lottery. You have little idea of what you are doing; you are just greatly increasing (perhaps by orders of magnitude) the variability of the resulting offspring, and then you apply regular breeding to get the varieties you were looking for.
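
A toy picture of that lottery, with made-up numbers: crank up the mutation step and the same selection scheme covers far more ground per generation, even though most individual mutants are worse:

# Mutagenesis as a larger mutation step, followed by ordinary selection.
import random

def breed(mut_sd, generations=40, pop=200, target=100.0):
    trait = 0.0
    for _ in range(generations):
        offspring = [trait + random.gauss(0, mut_sd) for _ in range(pop)]
        trait = min(offspring, key=lambda x: abs(x - target))  # keep the best
    return trait

print(f"normal mutation rate: trait ~{breed(0.5):.1f}")   # slow progress
print(f"with mutagenesis:     trait ~{breed(5.0):.1f}")   # much faster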

Nowadays, a strong movement exists that tells us to "eat organic" and "natural". Surely, they would disagree with this practice of radical variation? No, they do not. Instead, mutagenesis is one of the accepted breeding methods for Organic Agriculture.

Directed Mutation

Suppose you are a young scientist doing genetic research on plants. Specifically, you are looking at the problems surrounding caterpillar damage to soybean crops.

You know that, for example, a specific plant in Northern Europe is especially resistant to caterpillars. This, you find, is because it makes a specific protein that is inedible to caterpillars; it is even detected by butterflies, making them less eager to deposit their offspring on the plant. The gene sequence is quite simple, really. Let's call it the NOCATERPIE gene sequence. You also find that the NOCATERPIE protein acts through a mechanism that is not even present in humans, and is thus completely safe to eat.

After ten years of studying genetics and its mechanisms, you have learned that it is in principle possible to insert a new gene sequence and have it synthesised by the cellular machinery. During your studies you have seen the enzymes (nucleases, ligases, polymerases) used by cells to cut, read and edit the DNA code. You devise a clever scheme to insert the NOCATERPIE gene sequence into the soybean genome.
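
As a cartoon of the insertion step: the recognition site below is the real EcoRI one, but the host sequence and the NOCATERPIE insert are made up, and any real protocol involves vectors, flanking sequences and rounds of validation far beyond a string splice:

# Toy sketch of inserting a gene sequence at a nuclease cut site.
genome = "ATGCCGGAATTCCTTAAGGCTAT"    # made-up host fragment
site = "GAATTC"                       # EcoRI recognition site
nocaterpie = "ATGAAACCCGGGTTTTAA"     # made-up NOCATERPIE coding sequence

cut = genome.index(site) + 1          # EcoRI cuts between G and AATTC
edited = genome[:cut] + nocaterpie + genome[cut:]
print(edited)
assert nocaterpie in edited           # crude stand-in for the PCR check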

In the lab, in a carefully quarantined room, you grow the new plant(s). You test whether the protein is present and run PCR to confirm the changes made to the genome. You test the plants, within quarantine, for caterpillar damage. Fast-forward a few years: after many tests, you have determined the plant is exactly what you aimed for. After passing an extensive review, not only by your university but also by the country's ethical research committees, you are allowed to take the plant to the university greenhouse. Again, you test extensively and confirm your expectations. Finally, you are allowed to do a field test, and you find you have created a soybean variety that is resistant to caterpillars.

That's called genetic engineering. It is the most recent addition to our breeding efforts. The exceedingly clear name used in the new book I am reading is: genetically engineered mutations. The author is completely correct in that view; all that is done is genetically engineering a mutation. These mutations, however unlikely, could happen spontaneously in nature.

Sure, it is a very direct and guided way of going about variation. Even so, there is nothing inherently ethically wrong with it. The precautionary principle applies just as strongly as it does to 'mere' natural variation, mutagenesis, cross-breeding/fertilisation, and so forth. Location is not an ethical consideration; that something was done in a lab is not sufficient justification for ethical condemnation.

The book, by the way, is The Ancestor's Tale and is written by Dr. Richard Dawkins.