posted on Oct, 19 2007 @ 03:07 PM
I hope the MODs don't mind (if so, please delete). I am posting a response I received from Mr. Mackey to a question I presented to him, one that I often
hear, and I wanted a somewhat layman's answer. I did post this question on another thread in here because of the ongoing conversation there.
My E-mail to Mr. Mackey:
I have one question for you, if you don't mind. I often hear this
argument:
>
> "The computer models stopped at collaspe initiation because of the
ridiculous variables that they had to put into the computer to start it.
> And because the collapse looked absolutely nothing like what we saw on
9/11, in that it was not symmetric and they could not get it to
> progress!!! "
>
> Would you be able to explain to me in layman's terms how I can respond to
> this?
His response:
That argument is nonsense. The computer models stopped at collapse
initiation (and sometimes before!) because of what's called a "convergence
problem." It has nothing to do with a need for unrealistic initial
conditions or because it would give a politically incorrect answer.
NIST ran two different major structural models. These models do different
things. The one for the structure, contained in NCSTAR1-6D but also
baselined in NCSTAR1-2A, is run by a program called SAP2000. The other one,
considering the dynamics of the aircraft impacts, was run in LS-DYNA.
SAP2000 is a structural model that essentially solves the static load
problem. It cannot represent moving objects (although there may be some
workarounds, but nothing accurate on this scale). Basically the way it
works is by solving the stress-strain relationship for each element, which
is a simple equation, using a look-up table for the material properties as
a function of strain.
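As a rough illustration only -- none of these numbers come from NIST's models -- a few lines of Python show what "look up the stress for a given strain" means in practice:

    # Minimal sketch of a look-up-table material model (illustrative values only).
    import numpy as np

    strain_table = np.array([0.0, 0.002, 0.02, 0.15])    # strain (dimensionless)
    stress_table = np.array([0.0, 400.0, 450.0, 500.0])  # stress (MPa), hypothetical steel

    def stress_from_strain(strain):
        """Interpolate the stress for a given strain from the table."""
        return np.interp(strain, strain_table, stress_table)

    print(stress_from_strain(0.001))   # elastic range -> 200 MPa
    print(stress_from_strain(0.05))    # plastic range -> about 462 MPa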
The reason it's difficult is because that simple stress-strain
relationship becomes an ENORMOUS matrix problem. Each piece of the
structure has several variables, representing position and strain (think
"stretch") in several dimensions. The load is a mixture of fixed boundary
conditions, like area loads on floors, and "self-weights" of the components
themselves. The elements are coupled to other elements to varying degrees.
A simple way to think about it is as follows: Start with the structure as
originally built. Then apply the load. The load creates stresses in each
component, and all of these have to balance. Once you have this, the
stresses lead to strains, and the structure sags a little bit as a result.
That changes the stress distribution, so you solve again. That changes the
strain. And so on.
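In code, that solve/re-solve loop might look like the toy below: one made-up spring whose stiffness softens as it deflects stands in for the whole structure. It is only a sketch of the iteration, not SAP2000's actual algorithm:

    # Toy version of the apply-load / solve / re-solve loop (invented numbers).
    k0 = 100.0      # initial stiffness, kN/m (hypothetical)
    load = 50.0     # applied load, kN (hypothetical)

    def stiffness(u):
        """Stiffness that softens a little as the structure deflects (illustrative)."""
        return k0 / (1.0 + u)

    u = 0.0
    for step in range(100):
        u_new = load / stiffness(u)      # re-solve equilibrium with the sagged shape
        if abs(u_new - u) < 1e-9:        # stresses and strains have balanced
            break
        u = u_new
    print(f"converged after {step} steps, deflection = {u_new:.6f}")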
At every step, you perform a calculation that is essentially a matrix
inversion. The matrices here, by the way, are incredibly large -- for the
WTC cases, they are literally bigger than a million by a million.
Matrices cannot always be inverted. If, for instance, there is a row of
all zeroes, a matrix is said to be "degenerate," and it cannot be inverted.
Inverting such a matrix is logically equivalent to dividing by zero. This
is, for instance, what would happen if you tried to include a completely
detached piece in the SAP2000 model.
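A small experiment (hypothetical numbers, any linear-algebra library will do) shows what that degeneracy looks like:

    # A row of zeros -- e.g. a piece with nothing attaching it to the structure --
    # makes the stiffness matrix singular, and the inversion simply fails.
    import numpy as np

    K = np.array([[ 2.0, -1.0, 0.0],
                  [-1.0,  2.0, 0.0],
                  [ 0.0,  0.0, 0.0]])   # last row/column: the detached piece

    try:
        np.linalg.inv(K)
    except np.linalg.LinAlgError as err:
        print("cannot invert:", err)     # prints "Singular matrix"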
In NIST's calculations, the matrices don't ever actually reach degeneracy,
but they get awfully close -- an element on the diagonal that is very, very
small (i.e. close to zero) results in an "ill-posed" matrix. Inverting
this matrix is like dividing by a very small number, i.e. multiplying by a
very large number, and thus the outcome is not very stable. A small error
in this number -- even a roundoff error -- can lead to large changes in the
final result.
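The near-degenerate case can be shown just as quickly. The 2x2 matrix below (values invented) is technically invertible, but a perturbation a thousand times smaller than the load changes the answer noticeably:

    # A tiny diagonal entry makes the matrix nearly singular ("ill-conditioned"),
    # so roundoff-sized changes in the input swing the solution.
    import numpy as np

    K = np.array([[1.0, 0.0],
                  [0.0, 1e-12]])          # one nearly-zero diagonal entry
    print(np.linalg.cond(K))              # condition number ~1e12

    f = np.array([1.0, 1e-12])
    f_perturbed = f + np.array([0.0, 1e-15])
    print(np.linalg.solve(K, f))            # [1. 1.]
    print(np.linalg.solve(K, f_perturbed))  # [1.    1.001] -- 0.1% shift from a 1e-15 nudge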
The more stable a structure is, and the less it deflects under load, the
easier it is to solve. The closer the WTC models got to instability, the
harder they were to solve. Eventually the simulation simply
cannot proceed, due to the "convergence problem" I mentioned above. Either
the matrix inversion step gives unrealistic answers, or it results in such
a large change compared to the last step that it overshoots each time we
try to refine our result, and thus we get no single-valued answer.
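Extending the earlier toy loop shows exactly this behavior. With a stiffness that softens toward zero (all values invented), the iteration settles as long as the load stays below the limit; just past it there is no equilibrium left to find and the solver never returns one answer:

    # Same solve/re-solve idea, but the stiffness heads toward zero -- a stand-in
    # for a structure approaching instability.  Numbers are illustrative only.
    def solve(load, k0=100.0, u_crit=1.0, max_iter=500):
        u = 0.0
        for _ in range(max_iter):
            k = k0 * (1.0 - u / u_crit)    # stiffness falls off as it deflects
            if k <= 0.0:
                return None                # stiffness gone: deflection ran away
            u_new = load / k
            if abs(u_new - u) < 1e-9:
                return u_new               # converged to a single answer
            u = u_new
        return None                        # never settled: the convergence problem

    print(solve(20.0))   # below the limit load (25 here): converges near 0.276
    print(solve(26.0))   # just above it: None -- no single-valued answer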
This happens in real life, too. Think of a single column, near to its
buckling load, supporting a structure above. Which way will it bend?
Either way is equally energetic. As it gets closer to failure, the error
in our calculation becomes more and more significant.
Now in terms of actually modeling the collapse itself, this is much, much
worse. The situation above is still static, i.e. not moving, at least not
very fast, and we are still going to hit convergence limits. But now we
want to go even beyond that and consider a dynamic situation.
SAP2000 cannot do this. Instead, we could use a tool like LS-DYNA, which
doesn't just handle the stress-strain relationships, but also considers
kinetics -- motion, impulse, and much more focus on timestepping. Very,
very small timesteps.
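A one-mass cartoon (made-up values, nothing like a real LS-DYNA input deck) gives the flavor of that timestepping:

    # Explicit timestepping in miniature: one falling mass and a contact spring
    # stand in for millions of coupled elements.  All values are invented.
    mass = 1000.0      # kg
    g = 9.81           # m/s^2
    k = 5.0e5          # contact spring stiffness, N/m (hypothetical)
    dt = 1.0e-5        # s -- the "very, very small timesteps"

    x, v, t = 2.0, 0.0, 0.0          # start 2 m above the contact point, at rest
    while t < 1.0:
        force = -mass * g + (-k * x if x < 0.0 else 0.0)   # gravity + contact
        v += (force / mass) * dt     # update velocity from acceleration
        x += v * dt                  # update position from velocity
        t += dt
    print(f"after 1 s: x = {x:.3f} m, v = {v:.3f} m/s")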
We could, in theory, model the collapse in LS-DYNA. But the modeling
problem is vastly more complicated than it was before. First, we have to
decide what the actual state of the components is at the instant of
collapse, and even small uncertainties here will result in large
uncertainties in the final results. Second, we have to go through the same
process above, but now we have to do it at every timestep, so perhaps a
million times as many calculations as before. Third, we have far more
variables than before -- instead of just XYZ and the strain values for each
member, we now also have velocity, adding six more degrees of freedom (three
translational and three rotational). Fourth, every time two objects
contact each other, exactly how force is transmitted is extremely sensitive
to the exact geometry. Think of all the various ways a bowling pin can
fall, and that's contact between loose, rounded objects. The variety of
objects in the WTC collapse -- shape, strength, etc. -- will be vastly
greater.
The complexity of the aircraft impact models was limited by these
performance constraints. That's why the aircraft was simplified and why
the results are open to some interpretation. Modeling the structure
collapse in similar fashion would be hundreds of times worse, or
(alternately) hundreds of times more coarse.
There's no point to doing this. What we really need is a gross-order
understanding of behavior. This is provided by models such as the one in
Bazant, Le, Benson, and Greening. Similarly, the NIST impact models don't
really care about the exact disposition of every fragment of aircraft, but
only things like the total momentum transmitted to the structure, the rough
order distribution of fuel, and the expected loads on the large core
columns. This is about the limit of detail that we can reasonably
calculate.
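For a feel of what "gross-order" means, the sketch below tracks nothing but a falling mass, its speed, and an assumed energy absorbed per crushed storey. It is emphatically not the Bazant et al. model, and every number in it is invented; the point is only how little detail such a model needs compared to a full finite-element run:

    # Toy one-dimensional energy bookkeeping for a progressive collapse.
    # Purely illustrative -- not NIST's or Bazant's actual formulation.
    g = 9.81
    storey_height = 3.7        # m (assumed)
    mass = 3.0e7               # kg, assumed falling upper block
    floor_mass = 2.5e6         # kg, assumed mass accreted per storey
    energy_per_storey = 5.0e8  # J, assumed energy absorbed crushing one storey

    v = (2.0 * g * storey_height) ** 0.5      # speed after one storey of free fall
    for storey in range(1, 90):
        energy = 0.5 * mass * v**2 + mass * g * storey_height  # entering the storey
        energy -= energy_per_storey                             # crushing cost
        if energy <= 0.0:
            print(f"collapse arrests at storey {storey}")
            break
        v = (2.0 * energy / mass) ** 0.5       # speed after crushing the storey
        v = mass * v / (mass + floor_mass)     # inelastic pickup of the floor's mass
        mass += floor_mass
    else:
        print(f"collapse runs to the ground at roughly {v:.0f} m/s")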
I've made the argument many times that dynamic models are just not that
precise. If we could accurately model the entire WTC collapses, then I
should be able to take that same model to Las Vegas, go to the craps
tables, and make a billion dollars before the bar closes. Obviously, I
can't. Even the most sophisticated models cannot accurately predict what
side of a six-sided die will come up when thrown. There is no reason to
expect billions of times higher precision from NIST.
Hope that answers your question.
Thanks,
Ryan Mackey