can be regarded as the effective force vector,
equivalent to the prescribed displacements.
In subsequent iterations, the iterative increments of the prescribed
displacements are zero and hence the effective force vector vanishes.
A similar effective force vector can be generated if time influences
the analysis, e.g., via prescribed temperature increments or
viscoelastic material behavior.
In this case, the effective force vector contains the effect on the
internal force vector during the time increment if the displacements
remain constant.
The addition of this effective force vector in the first prediction
(zero iteration) will improve the convergence of the iteration process
significantly.
In subsequent iterations the time does not change anymore, and this
effective load vector also vanishes.
In real-life analyses, the loading does not have to be restricted to
load control, displacement control, or time increments: these can
be combined in any way.
In that case the `real' external load and the effective force vectors
from prescribed displacement increments and time influences must
be used together.
31.1.5.2 Arc-length Control
In an ordinary iteration process the predictions for the displacement
increments can become very large.
This is especially the case if the load-displacement curve is almost
horizontal.
If a fixed load increment is prescribed, this results in very large
predictions for the displacements.
The problem can be overcome with the use of an Arc-length method.

[Figure 31.10: Arc-length control]

Using the Arc-length method, the snap-through behavior of
Figure 31.10a can be analyzed, just as displacement control could.
Here, however, it is possible to define a system of loads that could not
be substituted by prescribed displacements.
Moreover, the Arc-length method is also capable of passing
snap-back behavior [Fig. 31.10b],
where displacement control fails.
The Arc-length method constrains the norm of the incremental displacements to
a prescribed value.
This is done by simultaneously adapting the size of the increment.
Note that the size is adapted within the iteration process and is not
fixed at the moment the increment starts.
For this purpose we define the external force vector at the start of the
increment as ^{t}f_{ext} and the increment of the external force vector as
Δλ_{i} f̂.
The load factor Δλ_{i} multiplies a unit load f̂
and can change every iteration.
Substitution in (31.7) results in

δu_{i} = K_{i}^{-1} ( ^{t}f_{ext} + Δλ_{i} f̂ - f_{int,i} )    (31.22)

The solution δu_{i} is now split in two parts:

δu_{i}^{I} = K_{i}^{-1} ( ^{t}f_{ext} - f_{int,i} )   and   δu_{i}^{II} = K_{i}^{-1} f̂    (31.23)

The total iterative increment is then derived from

δu_{i} = δu_{i}^{I} + Δλ_{i} δu_{i}^{II}    (31.24)

The load factor Δλ_{i} is still undefined and can now be
used to constrain the incremental displacement vector.
DIANA offers a quadratic and a linearized constraint,
leading to the Spherical Path Arc-length method
and the Updated Normal Plane method,
see Crisfield [17, §9.3].
Spherical Path.
In the spherical constraint, the constraint equation is

Δu_{i}^{T} Δu_{i} = Δl^{2}    (31.25)

where Δl is the required arc length.
Substitution of (31.6) and (31.24)
in (31.25) gives the value for Δλ_{i} as a root of the quadratic equation

a_{1} Δλ_{i}^{2} + a_{2} Δλ_{i} + a_{3} = 0    (31.26)

with

a_{1} = (δu_{i}^{II})^{T} δu_{i}^{II}
a_{2} = 2(δu_{i}^{I})^{T} δu_{i}^{II} + 2(Δu_{i-1})^{T} δu_{i}^{II}
a_{3} = 2(Δu_{i-1})^{T} δu_{i}^{I} + (δu_{i}^{I})^{T} δu_{i}^{I} + (Δu_{i-1})^{T} Δu_{i-1} - Δl^{2}    (31.27)
Normally, two solutions for Δλ_{i} fulfill (31.25), but
if the discriminant a_{2}^{2} - 4 a_{1} a_{3} < 0 then
DIANA uses a linearized equivalent of the
Spherical Path method as described by Forde and Stiemer [23].
To determine which of the two regular solutions should be used,
the angle θ between the displacement increment vector of the previous
iteration and that of the current iteration is calculated for both solutions:

cos θ = (Δu_{i-1})^{T} Δu_{i} / ( ||Δu_{i-1}|| ||Δu_{i}|| )    (31.28)

If one of the solutions yields a negative cosine and the other a positive one,
DIANA chooses the solution with the positive cosine (acute angle).
If both solutions yield acute angles, the solution closest to the
linear solution Δλ_{i} = -a_{3}/a_{2} is used.
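The root selection described above can be sketched in Python; this is an illustrative sketch with our own function and variable names (not DIANA code), assuming the previous increment Δu_{i-1}, the solutions δu_{i}^{I} and δu_{i}^{II}, and the arc length Δl are given as arrays:

```python
import numpy as np

def spherical_path_lambda(du_prev, du_I, du_II, dl):
    """Solve the spherical-path constraint (31.25) for the load factor.

    du_prev : incremental displacements accumulated before this iteration
    du_I    : displacement response to the unbalanced forces
    du_II   : displacement response to the unit load
    dl      : required arc length
    """
    a1 = du_II @ du_II
    a2 = 2.0 * (du_prev + du_I) @ du_II
    a3 = (du_prev + du_I) @ (du_prev + du_I) - dl**2
    disc = a2**2 - 4.0 * a1 * a3
    if disc < 0.0:
        # a real implementation would switch to a linearized method here
        raise ValueError("negative discriminant: fall back to linearized method")
    roots = [(-a2 + s * np.sqrt(disc)) / (2.0 * a1) for s in (+1.0, -1.0)]

    def cosine(lmbda):
        # cosine of the angle between the previous and the updated increment
        du_new = du_prev + du_I + lmbda * du_II
        return (du_prev @ du_new) / (np.linalg.norm(du_prev) * np.linalg.norm(du_new))

    acute = [r for r in roots if cosine(r) > 0.0]
    if len(acute) == 1:
        return acute[0]
    # both (or neither) acute: take the root closest to the linear solution
    linear = -a3 / a2
    return min(roots, key=lambda r: abs(r - linear))
```

Note that the first iteration of an increment needs special treatment in practice, since Δu_{i-1} is then a zero vector and the cosine is undefined.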
Updated Normal Plane.
The second constraint is a linearized constraint.
If (31.25) is fulfilled for Δu_{i-1}, then
the constraint equation for Δu_{i} = Δu_{i-1} + δu_{i}
can be written as

(Δu_{i-1})^{T} δu_{i} = 0    (31.29)

where the quadratic term in δu_{i} is ignored.
Substituting (31.24) into (31.29) leads to
the expression for the load factor

Δλ_{i} = - ( (Δu_{i-1})^{T} δu_{i}^{I} ) / ( (Δu_{i-1})^{T} δu_{i}^{II} )    (31.30)

Geometrically this constraint means that the iterative increment must
be perpendicular to the total increment of the previous iteration.
The solution is projected on the plane normal to the previous solution;
hence the method is referred to as
the Updated Normal Plane method.
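The same computation for the linearized constraint is a one-liner; the sketch below (names are our own) also verifies the orthogonality that the constraint expresses:

```python
import numpy as np

def unp_lambda(du_prev, du_I, du_II):
    """Load factor from the orthogonality constraint (31.29):
    du_prev must be orthogonal to the iterative increment du_I + lambda*du_II."""
    return -(du_prev @ du_I) / (du_prev @ du_II)
```

Substituting the result back, `du_prev @ (du_I + lam * du_II)` vanishes up to round-off, i.e. the iterative increment lies in the plane normal to the previous increment.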
Indirect Displacement Control.
In the previous description of the constraint equations all displacements
were gathered together.
For global nonlinear behavior this is adequate, but for local collapse
mechanisms the method can perform better if only a part of the
displacements is considered.
The constraint equations remain the same as in (31.25)
and (31.29), but instead of the vectors Δu and δu,
vectors Δv and δv are considered, which retain
only a selected part of the degrees of freedom.
In the extreme case that only one entry in Δv is nonzero,
the arc length is defined as the displacement of the corresponding
degree of freedom.
A constant arc length during the analysis then results in
equal displacement increments for this degree of freedom.
Because the loading is defined as an external force, this type of control
is called Indirect Displacement control.
A variant of Indirect Displacement control is
Crack Mouth Opening Displacement control,
usually called CMOD control.
Just as in experiments, this can be used to control the increase
of the crack width.
In case of CMOD control,
a vector is formed with new `degrees of freedom'
that can, for example, represent the difference in displacements of
opposite nodes on a crack plane.
This vector is used in the constraint equations (31.25) or
(31.29).
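The reduced vector can be sketched as a weighted selection of degrees of freedom; the pair encoding below is an illustrative assumption for the sketch, not DIANA input syntax:

```python
import numpy as np

def constraint_dofs(u, selection):
    """Build the vector v from u for indirect displacement control.

    `selection` is a list of constraint entries; each entry is a list of
    (dof_index, weight) pairs. A single pair (i, 1.0) reproduces plain
    indirect displacement control on dof i, while [(i, 1.0), (j, -1.0)]
    gives a CMOD-style crack opening between dofs i and j.
    """
    return np.array([sum(w * u[i] for i, w in entry) for entry in selection])
```

With only one CMOD entry, the arc length in (31.25) reduces to the prescribed crack-opening increment per step.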
As long as the displacement increments per step remain relatively small, the
difference between the Spherical Path method
and the Updated Normal Plane method is small.
More important than the choice between these two methods is the choice of
the value for the arc length Δl.
The available Arc-length methods are particularly useful if they are combined
with adaptive load incrementation, as described in the next section.
31.1.5.3 Adaptive Loading and Time Increments
Up to here we have used the initial load, displacement, or time increment
as a fixed value.
In an analysis, we could for example reach a load level of 100 N by
defining ten increments of 10 N.
In combination with Arc-length control, the size of the increment can change
within an increment, but the start value was still fixed.
The size of the increments is limited by the physical behavior
in case of path dependency, and by the convergence characteristics of
the selected iteration process.
Especially in the latter case, the allowable step size depends on the
amount of nonlinearity in the increment.
This is usually not known a priori,
i.e., at the moment that the analysis starts, and
therefore the optimum increment sizes cannot be fixed beforehand.
To allow for result-dependent increment
sizes, `adaptive loading' can be used.
An even more important question that usually cannot be answered before
the analysis starts is whether at a certain load level the load must
increase further or must decrease, e.g., in case of a snap-through.
Three adaptive loading methods are implemented in DIANA:
an iteration-based method for all types of loading,
an energy-based method that can only be used in combination
with Arc-length control [§31.1.5.2],
and the cutback based automatic incremental loading method
[§31.1.5.4].
For the first two methods,
two algorithms are available to decide whether the next step must
be an increment or a decrement.
The cutback based automatic incremental loading method does not allow for
loading-unloading switching, but offers the possibility of error-controlled
time increments if the physical behavior explicitly depends on
the rate of change of the solution.
Iteration Based.
The iteration-based method uses the experience that, for many
analyses, the iteration process converges faster if the increment size
is smaller.
Based on a `desired number of iterations', the new increment size
is made larger than the previous one if the actual number of iterations
in the previous increment was smaller than
the desired number, and vice versa.
The size of the new load increment ^{t+Δt}Δλ, and with Arc-length
control the new arc length ^{t+Δt}Δl, is
calculated by

^{t+Δt}Δλ = ^{t}Δλ ( N^{d} / ^{t}N )^{γ} ,   ^{t+Δt}Δl = ^{t}Δl ( N^{d} / ^{t}N )^{γ}    (31.33)

Here ^{t}Δl is the length of the predictor displacements
of the previous step.
Further, N^{d} is the desired number of iterations and
^{t}N is the actual number of iterations
in the previous increment.
The power γ is usually set equal to 0.5.
The iteration-based method is also very useful to pass
sharp snap-throughs
or softening behavior in crack propagation analyses.
When the parameter γ is set equal to zero, a constant arc length is applied
throughout the whole analysis.
Experience shows that this method is stable in case of softening behavior.
Since

^{t+Δt}Δλ = ^{t}Δλ   and   ^{t+Δt}Δl = ^{t}Δl    (31.34)

it follows that the length of the incremental displacement vector
remains constant.
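The adaptation rule described above can be sketched as a small helper (a hypothetical function, not DIANA's implementation):

```python
def adapted_arc_length(prev_dl, n_desired, n_actual, gamma=0.5):
    """Iteration-based adaptation: enlarge the arc length when the previous
    increment converged in fewer iterations than desired, reduce it when it
    needed more; gamma = 0 keeps the arc length constant."""
    return prev_dl * (n_desired / n_actual) ** gamma
```

For example, with a desired iteration count of 4, an increment that converged in a single iteration doubles the arc length, while one that needed 16 iterations halves it.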
Energy Based.
The energy-based method can only be used in combination with the
Arc-length method.
This method calculates a load increment such that the vector product
of the load increment and the displacement increment in the first prediction
equals the vector product of the final load increment and displacement
increment of the previous step [Fig. 31.11].
Here ^{t}ΔW indicates a kind of final energy increment of step 1 and
^{t+Δt}ΔW_{0} a first prediction thereof in step 2.

[Figure 31.11: Work increment]

The new loading factor is derived from

^{t+Δt}Δλ_{0} f̂^{T} ^{t+Δt}Δu_{0} = ^{t}Δλ_{n} f̂^{T} ^{t}Δu_{n}    (31.35)

where index n indicates the last iteration of the previous increment.
In order to get a proper initial value for ^{t+Δt}Δλ_{0},
the vectors δu_{0}^{I} and δu_{0}^{II} must be calculated
with a tangential stiffness matrix.
Loading-Unloading.
A simple way to choose between increments and decrements is based on the
appearance of negative pivots in the global system of equations.
Often a negative pivot indicates unstable structural behavior that is
related to some type of snap-through.
If this occurs, the sign of the load increment must be changed from
loading to unloading.
Another method, proposed by Crisfield, can only be used in combination
with the Arc-length methods and is similar to the root selection used
in the Spherical Path Arc-length method.
The angle between the new prediction and the previous increment should be
acute, so

( ^{t}Δu )^{T} Δu_{0} > 0    (31.36)

In many occasions, both loading and unloading criteria will yield the
same result.
In case of multiple equilibrium paths (bifurcations)
the methods may differ, and one should be aware that `an' equilibrium path is
followed, not necessarily a stable equilibrium path.
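The acute-angle criterion reduces to a sign test on an inner product; a minimal sketch (function name is our own):

```python
import numpy as np

def load_direction(du_prev, du_pred):
    """Continue loading (+1) while the new prediction makes an acute angle
    with the previous increment; switch to unloading (-1) otherwise."""
    return 1 if du_prev @ du_pred > 0.0 else -1
```

A prediction pointing "back along" the previous increment thus flips the sign of the load increment.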
31.1.5.4 Cutback Based Automatic Incremental Loading
Cutback based automatic load stepping is a simple tool for adaptive load
increments.
Given a final loading,
the automatic load step controller tries to take as few load steps as possible
and at the same time tries to limit the number of iterations
in the iterative procedure.
The main advantage over the iteration-based load controller is that
the automatic load controller recovers from non-convergence in the
iterative solver.
An outline of the algorithm is as follows; the load step size is always
kept within the bounds

^{t+Δt}λ × minsiz ≤ ^{t+Δt}Δλ ≤ ^{t+Δt}λ × maxsiz    (31.37)
First the full loading is applied in a single step.
If the iterative procedure fails to converge,
the load step is decreased by a factor cutb [§12.3.2.4]
and the calculation is restarted.
If, after successive failures,
the load step becomes smaller than a user-specified part
of the full loading
(the MINSIZ parameter [§12.3.2.4]),
the automatic load controller gives up and issues an error message.
On the other hand,
if the iterative procedure converges fast with respect to the maximum
number of iterations
(user-specified parameter MAXITE [§12.3.5]),
the step size is increased.
Optionally, the maximum load step size can be limited to a user-specified part
of the full loading
(the MAXSIZ parameter [§12.3.2.4]),
for example if some intermediate results are needed for output purposes.
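The control flow can be outlined as follows. Here `solve_increment` is a placeholder for one nonlinear solve returning `(converged, converged_fast)` flags, and the parameter names mirror CUTB, MINSIZ and MAXSIZ; the growth factor and the loop structure are hedged assumptions of this sketch, not DIANA's actual algorithm:

```python
def automatic_load_stepping(solve_increment, cutb=0.25, minsiz=0.01,
                            maxsiz=1.0, growth=2.0):
    """Cutback-based automatic load stepping (illustrative sketch).

    Load steps are fractions of the full loading. Returns the list of
    applied load fractions, or raises if the step underflows MINSIZ.
    """
    applied, step, history = 0.0, 1.0, []
    while applied < 1.0:
        step = min(step, maxsiz, 1.0 - applied)   # never overshoot full load
        converged, fast = solve_increment(step)
        if not converged:
            step *= cutb                          # cut back and retry
            if step < minsiz:
                raise RuntimeError("step below MINSIZ: give up")
            continue
        applied += step
        history.append(applied)
        if fast:
            step *= growth                        # converged quickly: enlarge
    return history
```

For instance, a solver that only converges for steps up to half the full load would be driven through the fractions 0.5 and 1.0.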
Adaptive error-controlled time increments.
If the physical behavior of the model explicitly depends on the
rate of change of the displacements or of an internal variable
(for instance dynamics or viscous behavior),
then the computed solution at a certain time depends on the time
step sizes used.
If very small time increments are taken, this influence is negligible.
However, a priori it is not clear how small the step size should be.
Moreover, it is often unattractive to use a fixed time step size.
Clearly, one would like to use small time steps if there are
rapid changes in the system,
and to increase the step size if the system slowly relaxes to an
equilibrium state.
To explain the adaptive time step strategy in DIANA we
consider the initial value problem

u̇ = f(u)    (31.38)

Suppose that we use a time stepping method to advance the solution from
t_{n} to t_{n+1} with a step size Δt = t_{n+1} - t_{n}.
By doing so we introduce an error due to the finite
time step size Δt.
Let u(t) be the solution to (31.38)
with initial condition u(t_{n}) = u_{n}.
The time step error is then

u(t_{n+1}) - u_{n+1} = O(Δt^{m+1})    (31.39)

with m the order of consistency of the method.
Suppose that we also have a less accurate method,
with order of consistency m̃ (with m̃ < m).
In DIANA a second order Runge-Kutta method is
available, i.e., the SDIRK2 method [§33.4.5],
which has an embedded first order method.
So we have a second order accurate solution u_{n+1}
and a first order solution ũ_{n+1}.
We want the error to be less than a prescribed tolerance,
with a relative part ε and an absolute part ε_{abs}:

|| u_{n+1} - ũ_{n+1} || ≤ ε || u_{n+1} || + ε_{abs}    (31.40)

This criterion is used both to reject time steps and
to estimate the next time step:

Δt_{n+1} = Δt_{n} ( ( ε ||u_{n+1}|| + ε_{abs} ) / || u_{n+1} - ũ_{n+1} || )^{1/(m̃+1)}    (31.41)
Some extra logic is introduced to avoid unnecessary step rejections,
and to keep the step size constant to save LU-factorizations.
Moreover, care is taken not to overstep changes in the loading.
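The accept/reject decision and the step-size proposal of (31.41) can be sketched as follows; the safety factor and the guard against a vanishing error estimate are our own assumptions, not part of the manual's formula:

```python
import numpy as np

def step_control(u_high, u_low, dt, m_low=1, eps=1e-4, eps_abs=1e-8,
                 safety=0.9):
    """Embedded-pair time step control (illustrative sketch).

    u_high : higher-order solution at t_{n+1}
    u_low  : embedded lower-order solution at t_{n+1}
    Returns (accepted, dt_next).
    """
    err = np.linalg.norm(u_high - u_low)          # local error estimate
    tol = eps * np.linalg.norm(u_high) + eps_abs  # mixed rel/abs tolerance
    accepted = err <= tol
    # guard against division by zero when the two solutions coincide
    dt_next = dt * safety * (tol / max(err, 1e-300)) ** (1.0 / (m_low + 1))
    return accepted, dt_next
```

When the error estimate is well below the tolerance the step grows; when the step is rejected the same formula proposes a smaller retry step.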
Automatic time stepping without the SDIRK2 time stepping method is very much like
automatic load stepping [§31.1.5.4].
Conversely, if SDIRK2 is used while
there is no explicit rate dependence in the model,
a time-accurate solution is still produced.
However, if you are not interested in intermediate results,
it is better, for efficiency reasons, not to use SDIRK2.
DIANA-9.4.4 User's Manual - Analysis Procedures
First ed.
Copyright (c) 2012 by TNO DIANA BV.