

31.1.5 Incremental Procedures

The incremental-iterative solution procedure consists of two parts: the increment part and the iteration part. The iteration part was discussed in §31.1.1; in this section we treat the increment part. We first describe the simplest types: load control and displacement control. Then the Arc-length method is discussed, a method that can adapt the step size depending on the results in the current step. The initial choice of the step size for every increment is an important factor in the incremental-iterative process. Therefore, two methods are presented to determine step sizes and two methods to choose between loading and unloading, depending on the previous analysis results.

Finally, we present the Automatic Incremental Loading method [§]. This method requires only the final loading to be specified. From this, DIANA automatically determines the intermediate step sizes.

Load and Displacement Control

In §31.1.1 we presented iteration processes where the external load was increased at the start of the increment by directly increasing the external force vector $\mathbf{f}_{\mathrm{ext}}$. This is usually called `load control' [Fig.31.9a]. Another way to put an external load on a structure is to prescribe certain displacements $\mathbf{u}^{\mathrm{c}}$. This is called `displacement control' [Fig.31.9b].
Figure 31.9: Load and displacement control
In case of displacement control the external force vector is not increased directly. To get a proper first prediction of the displacements, the prescribed displacements must be incorporated in the external force vector. This effective force can be calculated by rewriting (31.7) and splitting the displacement increment vector in two parts: one referring to the unconstrained and another referring to the constrained displacements, respectively $\Delta\mathbf{u}^{\mathrm{u}}$ and $\Delta\mathbf{u}^{\mathrm{c}}$. The stiffness matrix and force vector are split likewise:

$$\left[\begin{array}{cc} \mathbf{K}^{\mathrm{uu}} & \mathbf{K}^{\mathrm{uc}} \\ \mathbf{K}^{\mathrm{cu}} & \mathbf{K}^{\mathrm{cc}} \end{array}\right]_{0} \left\{\begin{array}{c} \Delta\mathbf{u}^{\mathrm{u}} \\ \Delta\mathbf{u}^{\mathrm{c}} \end{array}\right\}_{0} = \left\{\begin{array}{c} \mathbf{g}^{\mathrm{u}} \\ \mathbf{g}^{\mathrm{c}} \end{array}\right\}_{0}$$ (31.20)

The unknown displacement increments $\Delta\mathbf{u}^{\mathrm{u}}$ can be calculated from the first row in (31.20) as

$$\Delta\mathbf{u}_{0}^{\mathrm{u}} = \left(\mathbf{K}_{0}^{\mathrm{uu}}\right)^{-1} \left( -\mathbf{K}_{0}^{\mathrm{uc}}\,\Delta\mathbf{u}^{\mathrm{c}} + \mathbf{g}_{0}^{\mathrm{u}} \right)$$ (31.21)

Comparing (31.7) and (31.21) indicates that $-\mathbf{K}_{0}^{\mathrm{uc}}\,\Delta\mathbf{u}^{\mathrm{c}}$ can be regarded as the effective force vector, equivalent to the prescribed displacements. In subsequent iterations, the iterative increments of the prescribed displacements are zero and hence the effective force vector vanishes.
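The partitioned solve of (31.20) and (31.21) can be sketched in a few lines of NumPy. The 3-DOF system, its stiffness values and the prescribed increment below are purely illustrative and not taken from the manual:

```python
import numpy as np

# Illustrative 3-DOF system: DOFs 0-1 are unconstrained, DOF 2 is prescribed.
K = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  2.0]])
g = np.zeros(3)          # out-of-balance force, zero at the first prediction
du_c = np.array([0.01])  # prescribed displacement increment for DOF 2

# Partition K as in (31.20): uu and uc blocks
K_uu = K[:2, :2]
K_uc = K[:2, 2:]

# Effective force vector equivalent to the prescribed displacements,
# and the unconstrained increments per (31.21)
f_eff = -K_uc @ du_c
du_u = np.linalg.solve(K_uu, f_eff + g[:2])
```

In subsequent iterations `du_c` is zero, so `f_eff` vanishes and only the out-of-balance force drives the solve, exactly as described above.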

A similar effective force vector can be generated when time influences the analysis, e.g. through prescribed temperature increments or viscoelastic material behavior. In this case, the effective force vector contains the effect on the internal force vector during the time increment if the displacements remain constant. Adding this effective force vector to the first prediction (zero iteration) improves the convergence of the iteration process significantly. In subsequent iterations, the time does not change anymore and this effective load vector also vanishes.

In real-life analysis, the loading need not be restricted to load control, displacement control or time increments; these can be combined in any way. In that case the `real' external load and the effective force vectors from prescribed displacement increments and time influences must be used together.

Arc-length Control

In an ordinary iteration process the predictions for the displacement increments can become very large, especially if the load-displacement curve is almost horizontal. If a fixed load increment is prescribed, this results in very large predictions for the displacements. The problem can be overcome with an Arc-length method. Using the Arc-length method the snap-through behavior of
Figure 31.10: Arc-length control
Figure 31.10a can be analyzed, just as with displacement control. Here, however, it is possible to define a system of loads that could not be substituted by prescribed displacements. Moreover, the Arc-length method is also capable of passing snap-back behavior [Fig.31.10b], where displacement control fails.

The Arc-length method constrains the norm of the incremental displacements to a prescribed value. This is done by simultaneously adapting the size of the increment. Note that the size is adapted within the iteration process and is not fixed at the moment the increment starts. For this purpose we define the external force vector at the start of the increment as ${}^{t}\mathbf{f}_{\mathrm{ext}}$ and the increment of the external force vector as $\Delta\lambda_{i}\,\hat{\mathbf{f}}$. The load factor $\Delta\lambda_{i}$ multiplies a unit load $\hat{\mathbf{f}}$ and can change every iteration. Substitution in (31.7) results in

$$\delta\mathbf{u}_{i} = \mathbf{K}_{i}^{-1} \left( \Delta\lambda_{i}\,\hat{\mathbf{f}} + {}^{t}\mathbf{f}_{\mathrm{int}} - \mathbf{f}_{\mathrm{int},i} \right)$$ (31.22)

The solution $\delta\mathbf{u}_{i}$ is now split in two parts:

$$\delta\mathbf{u}_{i}^{\mathrm{I}} = \mathbf{K}_{i}^{-1} \left( {}^{t}\mathbf{f}_{\mathrm{int}} - \mathbf{f}_{\mathrm{int},i} \right) \qquad\text{and}\qquad \delta\mathbf{u}_{i}^{\mathrm{II}} = \mathbf{K}_{i}^{-1}\,\hat{\mathbf{f}}$$ (31.23)

The total iterative increment is then derived from

$$\delta\mathbf{u}_{i} = \delta\mathbf{u}_{i}^{\mathrm{I}} + \Delta\lambda_{i}\,\delta\mathbf{u}_{i}^{\mathrm{II}}$$ (31.24)

The load factor $\Delta\lambda_{i}$ is still undefined and can now be used to constrain the incremental displacement vector. DIANA offers a quadratic and a linearized constraint, leading to the Spherical Path Arc-length method and the Updated Normal Plane method, see Crisfield [17, §9.3].

Spherical Path.

In the spherical constraint, the constraint equation is

$$\Delta\mathbf{u}_{i}^{\mathrm{T}}\,\Delta\mathbf{u}_{i} = \Delta l^{2}$$ (31.25)

where $\Delta l$ is the required arc length. Substitution of (31.6) and (31.24) in (31.25) gives the value for $\Delta\lambda$

$$\Delta\lambda_{i} = \frac{-a_{2} \pm \sqrt{a_{2}^{2} - 4 a_{1} a_{3}}}{2 a_{1}}$$ (31.26)


with
$$\begin{array}{l} a_{1} = (\delta\mathbf{u}_{i}^{\mathrm{II}})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{II}} \\ a_{2} = 2\,(\delta\mathbf{u}_{i}^{\mathrm{I}})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{II}} + 2\,(\Delta\mathbf{u})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{II}} \\ a_{3} = 2\,(\Delta\mathbf{u})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{I}} + (\delta\mathbf{u}_{i}^{\mathrm{I}})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{I}} + (\Delta\mathbf{u})^{\mathrm{T}}\,\Delta\mathbf{u} - \Delta l^{2} \end{array}$$ (31.27)

Normally, two solutions for $\Delta\lambda$ fulfill (31.25), but if the discriminant $a_{2}^{2} - 4 a_{1} a_{3} < 0$ then DIANA uses a linearized equivalent of the Spherical Path method as described by Forde and Stiemer [23]. To determine which of the two regular solutions should be used, the angle $\theta$ between the displacement increment vector of the previous iteration and that of the current iteration is calculated for both solutions

$$\cos\theta = \frac{(\Delta\mathbf{u}_{i-1})^{\mathrm{T}}\,\delta\mathbf{u}_{i}}{\Vert \Delta\mathbf{u}_{i-1} \Vert \, \Vert \delta\mathbf{u}_{i} \Vert}$$ (31.28)

If one of the solutions yields a negative cosine and the other a positive one, DIANA chooses the solution with the positive cosine (acute angle). If both solutions yield acute angles, the solution closest to the linear solution $\Delta\lambda = -a_{3}/a_{2}$ is used.
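As a sketch of the Spherical Path bookkeeping, the following hypothetical helper evaluates the coefficients of (31.27), solves the quadratic (31.26), and selects a root with the angle test of (31.28). The function name, the fallback for a negative discriminant, and the argument layout are assumptions for illustration, not DIANA's actual implementation:

```python
import numpy as np

def spherical_path_dlambda(du1, du2, Du, Du_prev, dl):
    """Choose the load-factor correction per (31.26)-(31.28).

    du1, du2 : delta-u^I and delta-u^II of (31.23)
    Du       : total displacement increment so far in this step
    Du_prev  : increment vector of the previous iteration, for (31.28)
    dl       : required arc length
    """
    # Coefficients of the quadratic constraint, cf. (31.27)
    a1 = du2 @ du2
    a2 = 2.0 * (du1 @ du2) + 2.0 * (Du @ du2)
    a3 = 2.0 * (Du @ du1) + du1 @ du1 + Du @ Du - dl**2
    disc = a2**2 - 4.0 * a1 * a3
    if disc < 0.0:
        # DIANA switches to a linearized variant here; we only signal it
        raise ValueError("negative discriminant: use linearized method")
    roots = [(-a2 + s * np.sqrt(disc)) / (2.0 * a1) for s in (+1.0, -1.0)]

    def cosine(lam):
        # Angle test of (31.28) for the candidate iterative increment
        du = du1 + lam * du2
        return (Du_prev @ du) / (np.linalg.norm(Du_prev) * np.linalg.norm(du))

    acute = [lam for lam in roots if cosine(lam) > 0.0]
    if len(acute) == 1:
        return acute[0]
    # Both acute: take the root closest to the linear solution -a3/a2
    return min(roots, key=lambda lam: abs(lam + a3 / a2))
```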

Updated Normal Plane.

The second constraint is a linearized constraint. If (31.25) is matched for $\Delta\mathbf{u}_{i-1}$, then the constraint equation for $\Delta\mathbf{u}_{i} = \Delta\mathbf{u}_{i-1} + \delta\mathbf{u}_{i}$ can be written as

$$(\Delta\mathbf{u}_{i-1})^{\mathrm{T}}\,\delta\mathbf{u}_{i} = 0$$ (31.29)

where the quadratic term in $\delta\mathbf{u}_{i}$ is ignored. Substituting (31.24) into (31.29) leads to the expression for $\Delta\lambda_{i}$

$$\Delta\lambda_{i} = - \frac{(\Delta\mathbf{u}_{i-1})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{I}}}{(\Delta\mathbf{u}_{i-1})^{\mathrm{T}}\,\delta\mathbf{u}_{i}^{\mathrm{II}}}$$ (31.30)

Geometrically this constraint means that the iterative increment must be perpendicular to the total increment of the previous iteration. The solution is projected on the plane normal to the previous solution, hence the name Updated Normal Plane method.
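The Updated Normal Plane update (31.30) is a one-liner; the helper name is hypothetical:

```python
import numpy as np

def unp_dlambda(du1, du2, Du_prev):
    # Linearized constraint (31.29): Du_prev . (du1 + dlam * du2) = 0,
    # solved for dlam as in (31.30)
    return -(Du_prev @ du1) / (Du_prev @ du2)
```

By construction the resulting iterative increment is orthogonal to `Du_prev`, which is exactly the geometric statement above.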

Indirect Displacement Control.

In the previous description of the constraint equations all displacements were gathered together. For global nonlinear behavior this is adequate, but for local collapse mechanisms the method can perform better if only a part of the displacements is considered. The constraint equations can remain the same as in (31.25) and (31.29), but instead of the vectors $\delta\mathbf{u}$ and $\Delta\mathbf{u}$, the vectors $\delta\mathbf{v}$ and $\Delta\mathbf{v}$ are considered, defined by

$$\mathbf{v} = \left\{\begin{array}{c} \alpha_{1} u_{1} \\ \alpha_{2} u_{2} \\ \alpha_{3} u_{3} \\ \vdots \\ \alpha_{n} u_{n} \end{array}\right\}$$ (31.31)

In the extreme case that only one entry in $\mathbf{v}$ is non-zero, the arc length is defined as the displacement of the corresponding degree of freedom. A constant arc length during the analysis then results in equal displacement increments for this degree of freedom. Because the loading is defined as an external force, this type of control is called Indirect Displacement control. A variant of Indirect Displacement control is Crack Mouth Opening Displacement control, usually called CMOD control. This can be used, just as in experiments, to control the increase in crack width et cetera. In case of CMOD control, a vector is formed with new `degrees of freedom' that can e.g. represent the difference in displacements of opposite nodes on a crack plane.

$$\mathbf{v} = \left\{\begin{array}{c} \vdots \\ \alpha_{1} u_{p} + \alpha_{2} u_{q} \\ \vdots \end{array}\right\}$$ (31.32)

This vector is used in the constraint equations (31.25) or (31.29).
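Forming the control vector $\mathbf{v}$ of (31.31) or (31.32) amounts to weighting selected degrees of freedom. A minimal sketch with a hypothetical row-based layout (the helper name and data structure are assumptions):

```python
import numpy as np

def control_vector(u, rows):
    """Build the control vector v used in (31.25)/(31.29).

    Each entry of `rows` is a list of (dof, alpha) pairs: a one-pair row
    gives indirect displacement control as in (31.31); a two-pair row
    such as [(p, 1.0), (q, -1.0)] gives a CMOD-style crack opening,
    the weighted difference of two nodal displacements as in (31.32).
    """
    return np.array([sum(alpha * u[dof] for dof, alpha in row)
                     for row in rows])
```

The resulting `v` replaces the full displacement vector in the constraint equations, so the arc length measures only the selected (combinations of) degrees of freedom.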
As long as the displacement increments per step remain relatively small, the differences between the Spherical Path method and the Updated Normal Plane method are small. More important than the choice between these two methods is the choice of the value for the arc length $\Delta l$. The available Arc-length methods are particularly useful if they are combined with adaptive load incrementation, as described in the next section.

Adaptive Loading and Time Increments

Up to here we have used the initial load, displacement or time increment as a fixed value. In an analysis, we could e.g. reach a load level of 100 N by defining ten increments of 10 N. In combination with Arc-length control the size of the increment can change inside an increment, but the start value was still fixed.

The size of the increments is limited by the physical behavior in case of path dependency and by the convergence characteristics of the selected iteration process. Especially in the latter case, the allowable step size depends on the amount of nonlinearity in the increment. This is usually not known a priori, i.e., at the moment the analysis starts, and therefore the optimum increment sizes cannot be fixed beforehand. To allow for result-dependent increment sizes, `adaptive loading' can be used. An even more important question that usually cannot be answered before the analysis starts is whether at a certain load level the load must increase further or must decrease, e.g. in case of a snap-through.

Three adaptive loading methods are implemented in DIANA: an iteration based method for all types of loading, an energy based method that can only be used in combination with Arc-length control [§], and the cutback based automatic incremental loading method [§]. For the first two methods, two algorithms are available to decide whether the next step must be an increment or a decrement. The cutback based automatic incremental loading method does not allow switching between loading and unloading, but offers the possibility of error-controlled time increments if the physical behavior explicitly depends on the rate of change of the solution.

Iteration Based.

The iteration based method uses the experience that for many analyses the iteration process converges faster if the increment size is smaller. Based on a `desired number of iterations', the new increment size is made larger than the previous one if the actual number of iterations in the previous increment was smaller than the desired number, and vice versa. The size of the new load increment ${}^{t+\Delta t}\Delta\lambda_{0}$ is calculated by

$${}^{t+\Delta t}\Delta\lambda_{0} = \frac{{}^{t}\Delta l}{\sqrt{\delta\mathbf{u}_{0}^{\mathrm{T}}\,\delta\mathbf{u}_{0}}} \left( \frac{N^{\mathrm{d}}}{{}^{t}N} \right)^{\gamma}$$ (31.33)

Here ${}^{t}\Delta l$ is the length of the predictor displacements of the previous step. Further, $N^{\mathrm{d}}$ is the desired number of iterations and ${}^{t}N$ the actual number of iterations in the previous increment. The power $\gamma$ is usually set equal to 0.5.

The iteration based method is also very useful to pass sharp snap-throughs or softening behavior in crack propagation analyses. When the parameter $\gamma$ is set equal to zero, a constant arc length is applied throughout the whole analysis. Experience shows that this method is stable in case of softening behavior. Since

$${}^{t+\Delta t}\Delta\lambda_{0} = \frac{{}^{t}\Delta l}{\sqrt{\delta\mathbf{u}_{0}^{\mathrm{T}}\,\delta\mathbf{u}_{0}}} \qquad\text{and}\qquad {}^{t+\Delta t}\Delta l = {}^{t+\Delta t}\Delta\lambda_{0}\,\sqrt{\delta\mathbf{u}_{0}^{\mathrm{T}}\,\delta\mathbf{u}_{0}}$$ (31.34)

it follows that the length of the incremental displacement vector remains constant.
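Equation (31.33) translates directly into code; a minimal sketch with hypothetical argument names (setting `gamma` to zero reproduces the constant arc length of (31.34)):

```python
import numpy as np

def next_load_factor(dl_prev, du0, n_desired, n_actual, gamma=0.5):
    """Iteration based adaptive increment, cf. (31.33).

    dl_prev  : predictor arc length of the previous step
    du0      : predictor displacements delta-u_0 of the new step
    gamma=0  : constant arc length throughout the analysis, cf. (31.34)
    """
    return (dl_prev / np.sqrt(du0 @ du0)) * (n_desired / n_actual) ** gamma
```

With the usual `gamma=0.5`, needing more iterations than desired shrinks the next increment, and converging quickly enlarges it.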

Energy Based.

The energy based method can only be used in combination with the Arc-length method. This method calculates a load increment such that the vector product of the load increment and the displacement increment in the first prediction equals the vector product of the final load increment and displacement increment of the previous step [Fig.31.11]. Here ${}^{t}W$ indicates a kind of final energy increment of step 1 and ${}^{t+\Delta t}\tilde{W}$ a first prediction thereof in step 2.
Figure 31.11: Work increment

The new loading factor is derived from

$${}^{t+\Delta t}\Delta\lambda_{0} = \sqrt{ \frac{ \left| {}^{t}\Delta\lambda_{n} \, ({}^{t}\Delta\mathbf{u})^{\mathrm{T}}\,\hat{\mathbf{f}} \right| }{ \left| \delta\mathbf{u}_{0}^{\mathrm{T}}\,\hat{\mathbf{f}} \right| } }$$ (31.35)

where index $n$ indicates the last iteration (of the previous increment). In order to get a proper initial value for $\Delta\lambda$, the vectors $\delta\mathbf{u}_{0}^{\mathrm{I}}$ and $\delta\mathbf{u}_{0}^{\mathrm{II}}$ must be calculated with a tangential stiffness matrix.


A simple way to choose between increments and decrements is based on the appearance of negative pivots in the global system of equations. Often a negative pivot indicates unstable structural behavior that is related to some type of snap-through. If this occurs, the sign of the load increment must be changed from loading to unloading.

Another method, proposed by Crisfield, can only be used in combination with the Arc-length methods and is similar to the criterion used in the Spherical Path Arc-length method: the angle between the new prediction and the previous increment should be acute, so

$${}^{t+\Delta t}\Delta\lambda = \begin{cases} +\left|{}^{t+\Delta t}\Delta\lambda\right| & \text{if } ({}^{t}\Delta\mathbf{u})^{\mathrm{T}}\,\delta\mathbf{u}_{0}^{\mathrm{II}} \ge 0 \\ -\left|{}^{t+\Delta t}\Delta\lambda\right| & \text{if } ({}^{t}\Delta\mathbf{u})^{\mathrm{T}}\,\delta\mathbf{u}_{0}^{\mathrm{II}} < 0 \end{cases}$$ (31.36)

On many occasions both loading-unloading criteria will yield the same result. In case of multiple equilibrium paths (bifurcations) the methods may differ, and one should be aware that `an' equilibrium path is followed, not necessarily a stable equilibrium path.

Cutback Based Automatic Incremental Loading

Cutback based automatic load stepping is a simple tool for adaptive load increments. Given a final loading, the automatic load step controller tries to take as few load steps as possible while at the same time limiting the number of iterations in the iterative procedure. The main advantage over the iteration based load controller is that the automatic load controller recovers from non-convergence in the iterative solver.

An outline of the algorithm is as follows.

$${}^{t+\Delta t}\lambda_{0} \times \mathrm{MINSIZ} \;\le\; {}^{t+\Delta t}\lambda \;\le\; {}^{t+\Delta t}\lambda_{0} \times \mathrm{MAXSIZ}$$ (31.37)

First the full loading is applied in a single step. If the iterative procedure fails to converge, the load step is decreased by a user-specified factor (the CUTB parameter [§12.3.2.4]) and the calculation is restarted. If, after successive failures, the load step becomes smaller than a user-specified part of the full loading (the MINSIZ parameter [§]), the automatic load controller gives up and issues an error message. On the other hand, if the iterative procedure converges fast with respect to the maximum number of iterations (the user-specified parameter MAXITE [§12.3.5]), the step size is increased. Optionally, the maximum load step size can be limited to a user-specified part of the full loading (the MAXSIZ parameter [§]), for example if some intermediate results are needed for output purposes.
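The cutback procedure above can be sketched as a simple loop. The callback protocol, the growth factor `grow` and the returned step history are assumptions for illustration; `cutb`, `minsiz` and `maxsiz` mirror the manual's CUTB, MINSIZ and MAXSIZ parameters:

```python
def automatic_load_stepping(solve_increment, total, cutb=0.25,
                            minsiz=0.01, maxsiz=1.0, grow=2.0):
    """Sketch of cutback based automatic incremental loading.

    solve_increment(step) -> (converged, fast) attempts one increment;
    `fast` means it converged well within the iteration limit (MAXITE).
    """
    applied, step, history = 0.0, total, []
    while applied < total:
        # Never overshoot the final loading; respect the MAXSIZ bound
        step = min(step, total - applied, maxsiz * total)
        converged, fast = solve_increment(step)
        if not converged:
            step *= cutb                    # cut back and retry
            if step < minsiz * total:
                raise RuntimeError("step below MINSIZ: giving up")
            continue
        applied += step
        history.append(step)
        if fast:
            step *= grow                    # converged quickly: enlarge
    return history
```

Starting from the full loading and cutting back on failure is what lets the controller recover from non-convergence automatically.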

Adaptive error-controlled time increments.

If the physical behavior of the model explicitly depends on the rate-of-change of the displacements or an internal variable (for instance dynamics, viscous behavior), then the computed solution at a certain time depends on the time step sizes used. If very small time increments are taken, this influence is negligible. However, a priori it is not clear how small the step size should be. Moreover, it is often unattractive to use a fixed time step size. Clearly, one would like to use small time steps if there are rapid changes in the system, and to increase the step size if the system slowly relaxes to an equilibrium state.

To explain the adaptive time step strategy in DIANA we consider the initial value problem

$$\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t} = \mathbf{f}(\mathbf{u})$$ (31.38)

Suppose that we use a time stepping method to advance the solution from $t_{n}$ to $t_{n+1}$ with a step size $\Delta t = t_{n+1} - t_{n}$. By doing so we introduce an error due to the finite time step size $\Delta t$. Let $\mathbf{u}(t)$ be the solution to (31.38) with initial condition $\mathbf{u}(t_{n}) = \mathbf{u}_{n}$. The time step error is then

$$\mathbf{u}(t_{n+1}) - \mathbf{u}_{n+1} = \mathcal{O}(\Delta t^{m+1})$$ (31.39)

with $m$ the order of consistency of the method. Suppose that we also have a less accurate method, with order of consistency $\hat{m}$ (with $\hat{m} < m$). In DIANA a second order Runge-Kutta method is available, the SDIRK2 method [§33.4.5], that has an embedded first order method. So we have a second order accurate solution $\mathbf{u}_{n+1}$ and a first order solution $\hat{\mathbf{u}}_{n+1}$. We want the error to be less than a prescribed error tolerance

$$\left| \mathbf{u}_{n+1} - \mathbf{u}(t_{n+1}) \right| \;\le\; \left| \hat{\mathbf{u}}_{n+1} - \mathbf{u}_{n+1} \right| \;\le\; \epsilon_{\mathrm{rel}} \times \left| \mathbf{u}_{n+1} \right| + \epsilon_{\mathrm{abs}}$$ (31.40)

This criterion is used both to reject time steps and to estimate the next time step:

$$\Delta t_{n+1} = \Delta t_{n} \times \left( \frac{ \Vert \hat{\mathbf{u}}_{n+1} - \mathbf{u}_{n+1} \Vert }{ \epsilon_{\mathrm{rel}} \times \Vert \mathbf{u}_{n} \Vert + \epsilon_{\mathrm{abs}} } \right)^{-1/2}$$ (31.41)

Some extra logic is introduced to avoid unnecessary step rejections, and to keep the step size constant to save LU-factorizations. Moreover, care is taken not to overstep changes in the loading.
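A step-size controller in the spirit of (31.40) and (31.41) might look as follows. The function name, the guard against a zero error estimate and the exponent 1/2 (which corresponds to an embedded first order method) are stated assumptions, not DIANA's exact logic:

```python
import numpy as np

def next_time_step(dt, u_new, u_hat, u_old, eps_rel=1e-3, eps_abs=1e-6):
    """Embedded-error time step controller, cf. (31.40)-(31.41).

    u_new : higher order (SDIRK2) solution at t_{n+1}
    u_hat : embedded lower order solution at t_{n+1}
    u_old : solution at t_n, used for the relative tolerance
    Returns (accept, dt_next): whether the step passes the tolerance
    test, and the proposed next step size.
    """
    err = np.linalg.norm(u_hat - u_new)     # local error estimate
    tol = eps_rel * np.linalg.norm(u_old) + eps_abs
    accept = err <= tol
    # Grow the step when the error is well below tolerance, shrink
    # otherwise; guard against a vanishing error estimate
    dt_next = dt * (tol / max(err, 1e-30)) ** 0.5
    return accept, dt_next
```

In practice a safety factor and bounds on the growth ratio are added, matching the "extra logic" mentioned above to avoid unnecessary rejections and needless refactorizations.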

Automatic time stepping without the SDIRK2 time stepping method is very much like automatic load stepping [§]. Conversely, if SDIRK2 is used while there is no explicit rate dependence in the model, then a time accurate solution is produced. However, if you are not interested in intermediate results it is better, for efficiency reasons, not to use SDIRK2.

DIANA-9.4.4 User's Manual - Analysis Procedures
First ed.

Copyright (c) 2012 by TNO DIANA BV.