COMPUTING AND COMPUTER MODELLING IN GEOTECHNICAL ENGINEERING
J.P. Carter1, C.S. Desai2, D.M. Potts3, H.F. Schweiger4 and S.W. Sloan5
1 Challis Professor, Department of Civil Engineering, The University of Sydney, Sydney, NSW, AUSTRALIA
2 Regents Professor, Department of Civil Engineering and Engineering Mechanics, University of Arizona, Tucson, Arizona, USA
3 Professor of Analytical Soil Mechanics, Department of Civil and Environmental Engineering, Imperial College of Science, Technology and Medicine, London, UK
4 Associate Professor, Institute for Soil Mechanics and Foundation Engineering, Graz University of Technology, Graz, AUSTRIA
5 Professor of Civil Engineering, Department of Civil, Surveying and Environmental Engineering, University of Newcastle, Newcastle, NSW, AUSTRALIA
1.0
ABSTRACT A broad review is presented of the role of computing in geotechnical engineering. Included in the discussions are the conventional deterministic techniques for numerical modelling, stochastic techniques for dealing with uncertainty, ‘soft-computing’ tools, as well as modern database software for geotechnical applications. Considerable emphasis is given to the methods commonly used for the solution of boundary and initial value problems. Constitutive modelling of soil and rock mass behaviour and material interfaces is an essential component of this type of computing, and so a review of recent developments and capabilities of constitutive models is also included. The importance of validating computer simulations and geotechnical software is emphasised, and some methodologies for achieving this are suggested. A description of several previously conducted validation studies is included. The paper also includes discussion of the limitations of various numerical modelling techniques and some of the more notable pitfalls. The concepts described in the paper are illustrated with examples taken from research and practice. In presenting these concepts and examples, emphasis has been placed on the behaviour of soil, but it is noted that many of the models and techniques described also have application in rock engineering. 2.0
INTRODUCTION
The desire to understand the physical world and to be able to describe it using mathematical concepts and numbers has long been a goal of scientists and engineers, and has been evident since at least the time of Pythagoras. The discipline of geotechnical engineering is no exception, as first researchers and now practitioners routinely make use of mathematical models and computer technology in their day-to-day work, trying to understand and predict the behaviour of geomaterials. For example, simple numerical procedures have been used for many years in geotechnical practice in the assessment of strength, the analysis of soil consolidation, and the estimation of slope stability.
With the introduction of electronic computers after World War II came the opportunity for engineers to make much more use of numerical procedures to solve the equations governing their practical problems. This new computing power has made possible the solution of quite complicated non-linear, time-dependent boundary and initial value problems that were once too tedious and intractable for hand methods of calculation. The ready availability of desktop and portable computers has meant that these numerical tools for the solution of boundary and initial value problems are no longer the preserve of academic and research engineers. With the spectacular improvements in hardware have come major developments in software, and very sophisticated packages for solving geotechnical problems are now available commercially. These range from software for the more routine tasks, such as limiting equilibrium calculations, to the most powerful non-linear finite element analyses. The availability of this powerful hardware and sophisticated geotechnical software has allowed geotechnical engineers to examine many problems in much greater depth than was previously possible. In
particular, they have allowed the possibility of using numerical methods to examine the important mechanisms that control the overall behaviour in many problems. Further, they can be used to identify the key parameters in any problem, thus indicating areas that require more detailed and thorough investigation. These developments have also meant that generally better quality field and laboratory data are required as inputs to the various models. However, often precise values for some of the input data for these numerical models will not be known, but knowledge of which parameters are most important allows judgements to be made about what additional information should be collected, and where additional resources are best directed in any particular problem. Furthermore, knowledge of the key model parameters together with the results of parametric investigation of a problem can often allow engineering judgements to be made with some confidence about the consequences of a particular decision, rather than in complete or partial ignorance of them. Increasingly, the tools for deterministic modelling of geotechnical problems are being coupled with statistical techniques, to provide a means of dealing with uncertainties in the key problem parameters, and of associating probabilities with the predicted response. The application of computer technology in geotechnical engineering has not been confined to the area of analysis and mathematical modelling. The availability of sophisticated information technology tools has also had noticeable impact on the way geotechnical engineers record, store, retrieve, process, visualise and display important geotechnical data. These tools range from the ubiquitous spreadsheet packages to database software and high-end graphics visualisation tools. This paper provides a broad review of the role of computing and computer technology in geotechnical engineering. A major part of the review includes discussion of some of the more common analytical and numerical modelling techniques available to geotechnical engineers, particularly those used today to obtain solutions on personal computers. The use of the numerical methods and the associated software is illustrated by applications taken from geotechnical research and practice. Throughout this discussion, emphasis is given to the role of these deterministic techniques in identifying mechanisms and providing the user with explanations of observed behaviour, as well as the consequences of planned actions and proposed construction activities. The examples considered involve analysis of strength and deformation of soil, and include pre-failure behaviour as well as the development of failure mechanisms within soil bodies. The need to validate and calibrate numerical models, particularly those now being used commonly in practice is emphasised, and some examples of validation studies are described. Some of the potential pitfalls of numerical analysis are also described. Included in the paper is a brief review of the stochastic and ‘soft computing’ tools that are increasingly being applied in practice to model geotechnical problems. In particular, an application of the artificial neural network technique is described. The role of database packages in geotechnical practice is also illustrated by a recent application on a large-scale construction project. 3.0
GEOTECHNICAL DESIGN
Traditionally, geotechnical design has been carried out using simple analysis or empirical approaches. The ready availability of inexpensive, but sophisticated, computer hardware and software has led to considerable advances in the analysis and design of geotechnical structures, with much progress made recently in the application of these modelling techniques in practice. In common with other branches of engineering, the design objectives in geotechnics may be identified as follows:
• Local stability of the structure and its support system, as well as overall stability, should be ensured.
• The induced movements must be tolerable, not only for the structure being designed but also for any neighbouring structures and services.
The design process usually consists of some form of assessment of these important aspects of ground behaviour, and often this will include calculations to provide estimates of stability and of the deformations resulting from the proposed works. Analysis therefore provides the mathematical framework for these calculations, and almost invariably the analytical tools are embodied in computer software allowing numerical analysis to be efficiently and conveniently executed. However, it is important to recognise that geotechnical design involves much more than just analysis; it often includes data gathering prior to analysis, as well as observation and monitoring during and following construction. Construction issues may also be significant and should be included in the decision-making process that constitutes geotechnical design.
The analytical tools used most often in the geotechnical design process are still those based on deterministic analysis. It is therefore appropriate to begin a review of computing in geotechnical engineering by discussing the various deterministic analysis methods. However, as there is growing development and use of computing tools that can deal with imprecise information, e.g., stochastic methods, neural networks and fuzzy logic, these will also be discussed later in this paper. As already mentioned, data gathering and processing is also important in geotechnical design, and so computer tools to assist with these tasks are also described. 4.0
DETERMINISTIC GEOTECHNICAL ANALYSIS
Usually, geotechnical analysis involves the solution of a boundary value or initial value problem, and most often this is achieved by some form of numerical solution procedure. In all geotechnical applications involving numerical analysis it is essential, for economic computer processing and to obtain a reliable numerical solution, to provide a good model of the physical problem. In finite difference and finite element analysis, for example, this normally involves at least two distinct phases, i.e.,
• idealisation, and
• discretisation.
Idealisation is achieved by breaking down the physical problem into its component parts, e.g., continuum components such as elastic regions, and discrete structural components such as beams, columns and plates. At this stage of the modelling process, reliable knowledge of the site geology is of paramount importance. In addition, the various constitutive models that will be employed in the analysis must be determined at this stage. The final subdivision of the problem domain should be only as detailed as is necessary for the purpose at hand. Too much detail only clutters the analysis and may cloud important aspects of the behaviour. The choice of an adequate level of detail is a matter for experience and judgement.
In a finite element procedure, for example, the idealised model is then further subdivided, or discretised, using an appropriate subdivision of elements. The aim here is usually to satisfy the governing equations of the problem separately within each element. Details, such as node and element numbers, will be assigned as part of this subdivision process. Each of these phases is illustrated schematically in Figure 1.
Figure 1 : Steps in the modelling process, showing (a) the physical problem (fill, sand, clay and rock strata), (b) the idealisation (material regions, smooth rigid boundary, fixed base) and (c) the discretisation (nodal points and elements)
As indicated previously, most geotechnical analysis involves an assessment of stability and deformation. Once the problem has been idealised, there are four fundamental conditions that should then be satisfied by the solution of the boundary or initial value problem. These are:
• equilibrium,
• compatibility,
• constitutive behaviour, and
• boundary and initial conditions.
Unless all four conditions are satisfied (either exactly or approximately), the solution of the ideal problem is not rigorous in the mathematical sense.
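By way of illustration of the bookkeeping involved in idealisation and discretisation (a sketch added for this review, with layer names, thicknesses and element sizes assumed), the fragment below discretises a simple layered profile of the kind shown in Figure 1 into two-noded elements, recording node positions, element connectivity and material identifiers.

```python
# Minimal illustration of the idealisation/discretisation bookkeeping described
# above: a layered profile (fill, sand, clay over rock) is idealised as a 1-D
# column and discretised into two-noded elements carrying a material identifier.
# Layer thicknesses and the element size are assumed purely for illustration.
layers = [("fill", 2.0), ("sand", 4.0), ("clay", 6.0)]   # name, thickness (m)
dz = 0.5                                                 # target element size (m)

nodes = [0.0]            # node depths, starting at the ground surface
elements = []            # (node_i, node_j, material) triples
top = 0.0
for name, thickness in layers:
    n_el = max(1, round(thickness / dz))                 # elements in this layer
    for k in range(n_el):
        z_bot = top + thickness * (k + 1) / n_el
        nodes.append(z_bot)
        elements.append((len(nodes) - 2, len(nodes) - 1, name))
    top += thickness

print(f"{len(nodes)} nodes, {len(elements)} elements")
for e, (i, j, mat) in enumerate(elements[:5]):
    print(f"element {e}: nodes {i}-{j}, depth {nodes[i]:.2f}-{nodes[j]:.2f} m, material {mat}")
```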
5.0
METHODS OF ANALYSIS
Some of the most common methods of analysis used in geotechnical engineering to solve boundary value problems are listed in Table 1. Included are numerical methods as well as some more traditional techniques that may be amenable to hand calculation. The numerical methods may be classified as follows:
• the finite difference method (FDM),
• the finite element method (FEM),
• the boundary element method (BEM), and
• the discrete element method (DEM).

Table 1. Summary of common analysis methods (✓ = requirement satisfied, ✗ = not satisfied)
| Requirement / Method | Limit equilibrium | Bound theorems: lower | Bound theorems: upper | Elastic analysis | Elastoplastic analysis: closed-form | Elastoplastic analysis: numerical |
| Equilibrium | Overall ✓, locally ✗ | ✓ | ✗ | ✓ | ✓ | ✓ |
| Compatibility | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ (1) |
| Boundary conditions | Force only | Force only | Displacement only | ✓ | ✓ | ✓ |
| Constitutive model | Failure criterion | Perfectly rigid plasticity | Perfectly rigid plasticity | Elastic | Elastoplastic | Any (2) |
| Collapse information | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |
| Information before collapse | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ |
| Comment | Simple | Safe estimate of collapse | Unsafe estimate of collapse | Closed-form solutions available | Complicated | Safe or unsafe? |
| Examples | Slip circle, wedge methods | - | - | Many | Limited | Powerful computer techniques: FDM, FEM, BEM and DEM |
(1) Inherent and induced material discontinuities can be simulated.
(2) Includes perfect plasticity and models that can allow for complicated behaviour such as discontinuous deformations, degradation (softening), and non-local effects.

Table 1 also provides an indication of whether the four basic requirements of a mathematically rigorous solution are satisfied by each of these techniques. It is clear that only the elastoplastic analyses are capable of providing a complete solution while also satisfying (sometimes approximately) all four solution requirements. The difficulty of obtaining closed-form elastoplastic solutions for practical problems means that numerical methods are the only generally applicable techniques. Detailed descriptions of each of the numerical methods listed in Table 1 may be found in a large number of textbooks (e.g., Zienkiewicz, 1967; Desai and Abel, 1972; Britto and Gunn, 1987; Smith and Griffiths, 1988; Beer and Watson, 1992; Potts and Zdravkovic, 1999, 2000), so there is no need to duplicate such
detail here. However, it is probably worth noting that the FDM, FEM and DEM methods consider the entire region under investigation, breaking it up, or discretising it, into a finite number of sub-regions or elements. The governing equations of the problem are applied separately and approximately within each of these elements, translating the governing differential equations into matrix equations for each element.
Compatibility, equilibrium and the boundary conditions are enforced at the interfaces between elements and at the boundaries of the problem. On the other hand, in the BEM only the boundary of the body under consideration is discretised, thus providing a computational efficiency by reducing the dimensions of the problem by one. The BEM is particularly suited to linear problems. For this reason, and because it is well suited to modelling infinite or semi-infinite domains, the BEM is sometimes combined with the finite element technique. In this case, a problem involving non-linear behaviour in part of the infinite domain can be efficiently modelled by using finite elements to represent that part of the domain in which non-linear behaviour is likely, while also modelling accurately the infinite region by using boundary elements to represent the far field. For all methods, an approximate set of matrix equations may be assembled for all elements in the region considered. This usually requires storage of large systems of matrix equations, and the technique is known as the “implicit” solution method. An alternative method, known as the “explicit” solution scheme, is also employed in some software packages. This method usually involves solution of the full dynamic equations of motion, even for problems that are essentially static or quasi-static. The explicit methods do not require the storage of large systems of equations, but they are known to present difficulties in determining reliable solutions to some problems in statics. In the following, brief details of each of the main methods of numerical analysis are presented together with a summary of their main advantages and disadvantages, and their suitability for various geotechnical problems. Much of this discussion is based on suggestions proposed by Schweiger and Beer (1996). 5.1
Finite Element Method
The finite element method is still the most widely used and probably the most versatile method for analysing boundary value problems in geotechnical engineering. The main advantages and disadvantages for geotechnical analysis may be summarized as follows.

5.1.1 Advantages
• nonlinear material behaviour can be considered for the entire domain analysed.
• modelling of excavation sequences including the installation of reinforcement and structural support systems is possible.
• structural features in the soil or rock mass, such as closely spaced parallel sets of joints or fissures, can be efficiently modelled, e.g., by applying a suitable homogenisation technique.
• time-dependent material behaviour may be introduced.
• the equation system is symmetric (except for non-associated flow rules in elasto-plastic problems using tangent stiffness methods).
• the conventional displacement formulation may be used for most load-path analyses.
• special formulations are now available for other types of geotechnical problem, e.g., seepage analysis, and the bound theorem solutions in plasticity theory.
• the method has been extensively applied to solve practical problems and thus a lot of experience is already available.

5.1.2 Disadvantages
The following disadvantages are particularly pronounced for 3-D analyses and are less relevant for 2-D models.
• the entire volume of the domain analysed has to be discretised, i.e., large pre- and post-processing efforts are required.
• due to large equation systems, run times and disk storage requirements may be excessive (depending on the general structure and the implemented algorithms of the finite element code).
• sophisticated algorithms are needed for strain hardening and softening constitutive models.
• the method is generally not suitable for highly jointed rocks or highly fissured soils when these defects are randomly distributed and dominate the mechanical behaviour.
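As a minimal illustration of the displacement formulation and the symmetric equation system mentioned in the list of advantages, the sketch below (not taken from any particular finite element code) assembles and solves a one-dimensional finite element model of an elastic soil column loaded at the ground surface; all material properties, dimensions and loads are assumed for illustration only.

```python
import numpy as np

# 1-D displacement-based finite element sketch: an elastic soil column of height H,
# fixed at its base and loaded vertically at the ground surface.  Values assumed.
E, A, H, P, n_el = 20e6, 1.0, 10.0, 100e3, 10   # modulus (Pa), area (m2), height (m), load (N), elements

L_e = H / n_el
k_e = (E * A / L_e) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # two-noded element stiffness

K = np.zeros((n_el + 1, n_el + 1))              # global stiffness matrix (symmetric, banded)
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k_e                  # assemble element e between nodes e and e+1

f = np.zeros(n_el + 1)
f[0] = -P                                       # compressive load at the surface node

u = np.zeros(n_el + 1)
free = np.arange(n_el)                          # base node n_el is fixed (zero displacement)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

print(f"surface settlement    = {abs(u[0]) * 1000:.2f} mm")
print(f"closed form P*H/(E*A) = {P * H / (E * A) * 1000:.2f} mm")
```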
5.2
Boundary Element Method
Significant advances have been made in the development of the boundary element method and as a consequence this technique provides an alternative to the finite element method under certain circumstances, particularly for some problems in rock engineering (Beer and Watson, 1992). The main advantages and disadvantages may be summarized as follows.

5.2.1 Advantages
• pre- and post-processing efforts are reduced by an order of magnitude (as a result of surface discretisation rather than volume discretisation).
• the surface discretisation leads to smaller equation systems and smaller disk storage requirements, thus computation time is generally decreased.
• distinct structural features such as faults and interfaces located in arbitrary positions can be modelled very efficiently, and the nonlinear behaviour of the fault can be readily included in the analysis (e.g., Beer, 1995).

5.2.2 Disadvantages
• except for interfaces and discontinuities, only elastic material behaviour can be considered with surface discretisation.
• in general, non-symmetric and often fully-populated equation systems are obtained.
• a detailed modelling of excavation sequences and support measures is practically impossible.
• the standard formulation is not suitable for highly jointed rocks when the joints are randomly distributed.
• the method has only been used for solving a limited class of problems, e.g., tunnelling problems, and thus less experience is available than with finite element models.
5.3
Coupled Finite Element - Boundary Element Method
It follows from the arguments given above that it should be possible to minimise the respective disadvantages of both methods by combining them. This is in fact true and very efficient numerical models can be obtained by discretising the soil or rock around the region of particular interest, e.g., representing the region around a tunnel by finite elements and the far field by boundary elements (e.g., Beer and Watson, 1992; Carter and Xiao, 1993). Two disadvantages however remain, namely the cumbersome modelling of major discontinuities intercepting the region of interest in an arbitrary direction, e.g., a tunnel axis, and the non-symmetric equation system that is generated by the combined model. The latter problem may be resolved by applying the principle of minimum potential energy for establishing the stiffness matrix of the boundary element region (Beer and Watson, 1992). If this is done, then after assembling with the finite element stiffness matrix, the resulting equation system remains symmetric. 5.4
Explicit Finite Difference Method
The finite difference method does not have a long-standing tradition in geotechnical engineering, perhaps with the exception of analysing flow problems, including those involving consolidation and contaminant transport. However, with the development of the finite difference code FLAC (Cundall and Board, 1988), which is based on an explicit time marching scheme using the full dynamic equations of motion, even for static problems, an attractive alternative to the finite element method was introduced. Any disturbance of equilibrium is propagated at a material dependent rate. This scheme is conditionally stable and small time steps must be used to prevent propagation of information beyond neighbouring calculation points within one time step. Artificial nodal damping is introduced for solving static problems in FLAC. The method is comparable to the finite element method (using constant strain triangles) and therefore some of the arguments listed above basically hold for the finite difference method as well. However, due to the explicit algorithm employed some additional advantages and disadvantages may be identified.

5.4.1 Advantages
• the explicit solution method avoids the solution of large sets of equations.
• large strain plasticity, strain hardening and softening models and soil-structure interaction are generally easier to introduce than in finite elements.
• the model preparation for simple problems is very easy.
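The difference between this explicit strategy and the implicit approach used in most finite element codes can be illustrated with a toy problem (a sketch with assumed parameters, not an extract from FLAC or any other code): a chain of masses and springs under a static end load is solved once by a direct solution of K u = f and once by damped explicit time marching until the unbalanced forces become negligible.

```python
import numpy as np

# Toy contrast between an implicit solve and an explicit, damped time-marching
# scheme for a static problem: a chain of n masses and springs, fixed at one end
# and loaded at the free end.  All parameter values are assumed.
n, k, m, F = 20, 1.0e4, 1.0, 100.0

K = np.zeros((n, n))                      # stiffness of the spring chain (fixed node removed)
for i in range(n):
    K[i, i] += 2.0 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] -= k
        K[i - 1, i] -= k
f = np.zeros(n)
f[-1] = F

u_implicit = np.linalg.solve(K, f)        # "implicit": one direct solution of K u = f

# "Explicit": march m*dv/dt = (f - K u) - c*v with artificial damping until the
# unbalanced (out-of-balance) force is negligible.
dt = 0.5 * np.sqrt(m / k)                 # small, conditionally stable time step
w_min = np.sqrt(np.linalg.eigvalsh(K).min() / m)
c = 2.0 * m * w_min                       # damping tuned to the slowest mode
u = np.zeros(n)
v = np.zeros(n)
for step in range(1, 50001):
    unbalanced = f - K @ u
    if np.max(np.abs(unbalanced)) < 1e-6 * F:
        break                             # "static" state reached
    v += (unbalanced - c * v) / m * dt
    u += v * dt

print("explicit scheme steps        :", step)
print("max |u_explicit - u_implicit|:", np.max(np.abs(u - u_implicit)))
```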
5.4.2 Disadvantages
• the method is less efficient for linear or moderately nonlinear problems.
• until recently, model preparation for complex 3-D structures has not been particularly efficient because sophisticated pre-processing tools have not been as readily available, compared to finite element pre-processors.
• because the method is based on Newton’s law of motion, no “converged solution” for static problems exists, as is the case in static finite element analysis. The decision whether or not sufficient time steps have been performed to obtain a solution (below but close to failure) has to be taken by the user, and this judgement may not always be easy under certain circumstances, although several checks are possible (e.g., unbalanced forces, velocity field).

5.5 Discrete Element Method
The methods described so far are based on continuum mechanics principles and are therefore restricted to problems where the mechanical behaviour is not governed to a large extent by the effects of joints and cracks. If this is the case discrete element methods are much better suited for numerical solution. These methods may be characterised as follows: • finite deformations and rotations of discrete blocks (deformable or rigid) are calculated. • blocks that are originally connected may separate during the analysis. • new contacts which develop between blocks due to displacements and rotations are detected automatically. Several different approaches to achieve these criteria have been developed, with probably the most commonly used methods being the discrete element codes UDEC and 3-DEC (Lemos et al., 1985), which both employ an explicit finite difference scheme, as in the program FLAC. Due to the different nature of a discontinuum analysis, as compared to continuum techniques, a direct comparison seems to be not appropriate. The major strength of the distinct element method is certainly the fact that a large number of irregular joints can be taken into account in a physically rational way. The drawbacks associated with the technique are that establishing the model, taking into account all relevant construction stages, is still very time consuming, at least for 3-D analyses. In addition, a lot of experience is necessary in determining the most appropriate values of input parameters such as joint stiffnesses. These values are not always available from experiments and specification of inappropriate values for these parameters may lead to computational problems. In addition, run times for 3-D analyses are usually quite high. 5.6
Which Method For Which Problem?
Having discussed the main advantages and disadvantages of the most common numerical methods a reasonable question is which method should be used for any particular problem. Of course, the answer to this question will be very problem dependent. In many cases several methods may be appropriate and the decision on which to use will be made simply on the basis of the experience and familiarity of the analyst with these techniques. However, it is possible in certain classes of problem to provide broad guidelines. One example concerns the analysis of tunnels. This problem area has been addressed by Schweiger and Beer (1996), who suggested guidelines for tunnelling by considering the separate problems of shallow and deep tunnels. In making suggestions they discussed the importance of the input parameters and what can be expected from numerical analyses. For example, they concluded that the finite element and finite difference method are most suitable for shallow tunnels in soil, whereas the boundary element and discrete element method are most suitable for deep tunnels in jointed and faulted rocks. They also concluded that the most promising continuum approach is a combination of finite elements and boundary elements, because the merits of each method can be fully exploited as appropriate.
In the following sections of this paper some of the methods of numerical analysis described above and some of their essential components are described further. Their use is illustrated by applications from geotechnical practice. As indicated in Table 1, not all techniques are designed to provide a solution for the complete load-deflection behaviour of a soil or rock structure up to the point of collapse. Some address only the pre-failure behaviour and some the ultimate condition. The first method to be described in detail involves the use of the bound theorems of classical plasticity theory together with special finite element formulations to assess stability. These well-known methods for bracketing the true collapse load do not require a sophisticated constitutive model for the soil or rock mass. They are based on the assumption of a rigid plastic model for the soil or rock and require only the definition of a failure criterion and a plastic flow rule. With recent developments in computer technology, linear and non-linear programming and the finite element method, these classical theorems have a renewed and important role to play in geotechnical analysis, particularly as they may now be used routinely for three-dimensional problems. After all, determining when collapse will occur, and avoiding that condition in practice, is one of the fundamental requirements of geotechnical design. 6.0
LIMIT ANALYSIS USING FINITE ELEMENTS Stability analysis in geotechnical engineering has traditionally been carried out using either slip–line field or limit equilibrium techniques. Whilst slip–line methods have the advantage of being mathematically rigorous, they are notoriously difficult to apply to problems with complex geometries or complicated loading. A further shortcoming of these techniques is that the boundary conditions need to be treated specifically for each problem, thus making it difficult to develop general purpose computer programs which can analyse a broad range of cases. Despite these limitations, slip–line analysis has provided many fundamental solutions that are used routinely in geotechnical engineering practice (Sokolovskii, 1965). Limit equilibrium methods, although less rigorous than slip–line methods, can be generalised to deal with a variety of complicated boundary conditions, soil properties and loading conditions. The accuracy of limit equilibrium solutions is often questioned because of the assumptions that are needed to make the method work. Nonetheless, this approach is often favoured by practising engineers because of its simplicity and generality. Another approach for analysing the stability of geotechnical structures is to use the upper and lower bound limit theorems developed by Drucker et al. (1952). These theorems can be used to bracket the exact ultimate load from above and below and are based, respectively, on the notions of a kinematically admissible velocity field and a statically admissible stress field. A kinematically admissible velocity field is simply a failure mechanism in which the velocities (displacement increments) satisfy both the flow rule and the velocity boundary conditions, whilst a statically admissible stress field is one where the stresses satisfy equilibrium, the stress boundary conditions, and the yield criterion. The bound theorems assume the material is perfectly plastic and obeys an associated flow rule. The latter assumption, which implies the strain increments are normal to the yield surface, is often perceived to be a shortcoming for frictional soils as it predicts excessive dilation upon shear failure. For geotechnical problems which are not strongly constrained in a kinematic sense (e.g., those with a freely deforming surface and a semi–infinite domain), the use of an associated flow rule for frictional soils may in fact give good estimates of the collapse load. This important result is discussed at length by Davis (1969) and has been confirmed in a number of finite element studies (e.g., Sloan, 1981). Put simply, the upper bound theorem states that the power expended by the external forces may be equated to the power dissipated in a kinematically admissible failure mechanism to compute an unconservative estimate of the true collapse load. In geotechnical engineering, the simplest form of upper bound calculation is based on a mechanism comprised of rigid blocks where power is dissipated solely at the interfaces between adjacent blocks. Once a kinematically admissible mechanism has been formulated, the best upper bound is found by optimising the geometry of the blocks to yield the minimum dissipated power and, hence, the corresponding collapse load. This type of calculation is very useful for undrained stability analysis of clays where the soil can be modelled using a Tresca yield condition and deformation occurs at constant volume. 
For drained loading, however, where the soil is often assumed to obey a Mohr–Coulomb failure criterion, this type of computation is more difficult because of the dilation that accompanies plastic shearing along the discontinuities.
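As a concrete illustration of this type of rigid-block calculation (a sketch constructed for this review rather than an example from the paper), the code below estimates the critical height of a vertical cut in undrained clay using a single planar mechanism through the toe. Equating the work rate of the soil weight to the energy dissipated along the discontinuity gives H = 4cu/(γ sin 2θ), and optimising the mechanism geometry (minimising over the inclination θ) recovers the classical upper bound Hc = 4cu/γ at θ = 45°; the values of cu and γ are assumed.

```python
import numpy as np

# Rigid-block upper bound for the critical height of a vertical cut in undrained
# clay (Tresca soil).  A single wedge slides on a plane through the toe inclined
# at angle theta; the work rate of the soil weight is equated to the energy
# dissipated along the slip plane.  Material values are assumed for illustration.
cu = 50.0      # undrained shear strength (kPa) - assumed
gamma = 18.0   # unit weight (kN/m^3) - assumed

def upper_bound_height(theta):
    """Upper bound on H for a planar mechanism at inclination theta (radians)."""
    # work balance:  gamma*H^2*cos(theta)/2 * v  =  cu * (H/sin(theta)) * v
    return 4.0 * cu / (gamma * np.sin(2.0 * theta))

thetas = np.radians(np.linspace(10.0, 80.0, 1401))   # candidate mechanisms
H = upper_bound_height(thetas)
i_best = np.argmin(H)                                # best (lowest) upper bound

print(f"best mechanism angle   = {np.degrees(thetas[i_best]):.1f} deg")
print(f"best upper bound Hc    = {H[i_best]:.2f} m")
print(f"closed form 4*cu/gamma = {4 * cu / gamma:.2f} m")
```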
The lower bound theorem states that the collapse load obtained from any statically admissible stress field will under–estimate the true collapse load. Generally speaking, the upper bound theorem is applied more frequently than the lower bound theorem to predict soil behaviour, since it is usually easier to construct a good kinematically admissible failure mechanism than it is to construct a good statically admissible stress field. Although an upper bound solution often gives a useful estimate of the ultimate load, a lower bound solution is more desirable in engineering practice as it results in a safe design. The bound theorems are especially powerful when both types of solution can be computed so that the actual collapse load can be bracketed from above and below. This feature is invaluable when an exact solution cannot be determined, since it provides a built–in error check on the accuracy of the approximate collapse load. 6.1
Lower Bound Limit Analysis Formulations
The use of finite elements and linear programming to compute rigorous lower bounds for two–dimensional stability problems appears to have been first proposed by Lysmer (1970). Lysmer’s formulation is based on a simple three–noded triangular element with the nodal normal and shear stresses being taken as the problem variables. Following previous studies, the linearised yield condition is obtained by adopting an internal polyhedral approximation to the parent yield surface, so that each nonlinear inequality is replaced by a series of linear inequalities. By assuming a linear approximation for the stress field inside each element, it can be guaranteed that this yield condition is satisfied throughout the discretised region. The presence of statically admissible discontinuities, which are permitted between adjacent elements, greatly improves the accuracy of the final results. Application of the stress–boundary conditions, equilibrium equations, and linearised yield criterion generates the linear constraints on the stress field, while the objective function, which is maximised, corresponds to the collapse load. In Lysmer’s original formulation, the optimal solution to the linear programming problem, and hence the statically admissible stress field, was isolated by using the simplex algorithm. Though Lysmer’s method represented a significant advance, it had a number of shortcomings. The first of these resulted from the choice of problem variables which, although ingenious, led to a poorly conditioned constraint matrix. The second limitation was one of computational efficiency. With the simplex solution technique used by Lysmer, the analyses had to be restricted to meshes with only a few elements in order to avoid excessive computer times. This is a serious limitation and probably explains why the technique did not achieve the prominence that it deserved. The third and final shortcoming of the method was its inability to generate a complete stress field for a semi–infinite continuum. Semi–infinite soil masses arise frequently in geotechnical engineering, and it is necessary to extend the stress field throughout the entire domain for the solution to be classed as a rigorous lower bound. In Lysmer’s method, the solution obtained is an incomplete one, as the resultant stress field is statically admissible only in the region limited by the boundaries of the finite element mesh. Following Lysmer, other investigators, including Anderheggen and Knopfel (1972), Pastor (1978), and Bottero et al. (1980), proposed alternative two–dimensional lower bound techniques which are based on the linear programming method. These studies led to a number of key improvements, such as the development of extension elements for analysing semi–infinite media and the use of cartesian stresses as problem variables to simplify the formulation. In 1982, Pastor and Turgeman generalised their lower bound technique to deal with the important case of axisymmetric loading. Although potentially powerful, these early lower bound formulations were restricted by the performance of their underlying linear programming (LP) solvers. Because the techniques often lead to very large optimisation problems, their evolution has been linked closely with the development of efficient LP algorithms. Indeed, special features of the lower bound formulation, such as the extreme sparsity of the overall constraint matrix, must be exploited fully to avoid excessive computation times. 
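To illustrate the linearisation step referred to above, the sketch below (written for this review in the spirit of these formulations, not a reproduction of any particular one) replaces the plane-strain Mohr-Coulomb criterion by p linear inequalities of the form A_k σx + B_k σy + C_k τxy ≤ D, with the half-planes chosen so that the resulting polygon is inscribed in, and therefore lies inside, the parent yield surface. The sign convention (tension positive), the material parameters and the random check are all illustrative assumptions.

```python
import numpy as np

# Internal (inscribed-polygon) linearisation of the plane-strain Mohr-Coulomb
# yield criterion, in the spirit of linear programming lower bound formulations.
# Tension is taken as positive; c', phi' and the number of sides p are assumed.
c, phi, p = 10.0, np.radians(30.0), 24

alpha = 2.0 * np.pi * np.arange(p) / p     # normal directions of the p half-planes
cpp = np.cos(np.pi / p)                    # apothem factor of the inscribed polygon

# Linear inequalities  A_k*sx + B_k*sy + C_k*txy <= D  replacing the yield condition
A = np.cos(alpha) + np.sin(phi) * cpp
B = -np.cos(alpha) + np.sin(phi) * cpp
C = 2.0 * np.sin(alpha)
D = 2.0 * c * np.cos(phi) * cpp

# Numerical check that this is an internal approximation: every stress state the
# linearised condition admits must also satisfy the parent criterion
#   sqrt(((sx-sy)/2)^2 + txy^2) + ((sx+sy)/2)*sin(phi) - c*cos(phi) <= 0
rng = np.random.default_rng(0)
sx, sy, txy = rng.uniform(-200.0, 50.0, size=(3, 50000))
admitted = np.all(A * sx[:, None] + B * sy[:, None] + C * txy[:, None] <= D, axis=1)
F = np.sqrt(0.25 * (sx - sy) ** 2 + txy ** 2) + 0.5 * (sx + sy) * np.sin(phi) - c * np.cos(phi)
print("points admitted by the polygon      :", int(admitted.sum()))
print("admitted points violating the parent:", int(np.sum(admitted & (F > 1e-9))))
```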
In the late eighties, Sloan (1988a) introduced a formulation based on an active set algorithm that permits large two–dimensional problems to be solved efficiently on a PC or workstation. Practical applications of this scheme to date include tunnels (Sloan et al., 1990; Assadi and Sloan, 1991; Sloan and Assadi, 1991, 1992), foundations (Ukritchon et al., 1998; Merifield et al., 1999) and slopes (Yu et al., 1998). Despite the success of LP based lower bound formulations for the solution of two–dimensional and axisymmetric problems, these approaches are unsuitable for performing general three–dimensional limit
analysis. Although possible, the linearisation process for 3D yield functions inevitably generates huge numbers of linear inequalities. This in turn, results in unacceptably long solution times for any LP solver which conducts a vertex–to–vertex search (such as the traditional simplex method or any active set method). One alternative for finite element formulations of the lower bound theorem is to use nonlinear programming (NLP) solution methods. Irrespective of the approximation chosen to represent the stress field in the continuum, this approach assumes that the yield criterion is employed in its original nonlinear form. Three–dimensional stress fields thus present no special difficulty, apart from the extra variables involved. One such formulation, which uses linear stress finite elements, incorporates nonlinear yield conditions explicitly, and exploits the underlying convexity of the corresponding optimisation problem, has been developed recently by Lyamin and Sloan (1997, 2000a) and Lyamin (1999). In this method, the lower bound solution is found very efficiently by solving the system of nonlinear equations that define the Kuhn–Tucker optimality conditions. The solver used for this purpose, a variant of the one originally developed by Zouain et al. (1993) in their study of mixed limit analysis formulations, is a two–stage quasi–Newton scheme. After accounting for the nature of the lower bound optimisation problem and developing a new deflection strategy in the solution phase, the resulting optimisation procedure is many times faster than an equivalent LP formulation. Indeed, comparisons to date suggest that the new technique typically offers at least a 50–fold reduction in CPU time for large scale two–dimensional applications. The scheme can deal with any (convex) type of yield criterion and permits optimisation with respect to surface as well as body forces, which can be of unknown distribution. Because of its speed, the NLP formulation of Lyamin and Sloan (1997, 2000a) is ideally suited to three–dimensional applications, where the number of unknowns is usually very large. 6.2
Upper Bound Limit Analysis Formulations
General formulations of the upper bound theorem, based on finite elements and linear programming, have been investigated by Anderheggen and Knopfel (1972), Maier et al. (1972) and Bottero et al. (1980). These methods permit plastic deformation to occur throughout the continuum and inherit all of the advantages of the finite element technique, but have tended to be computationally cumbersome due to the large linear programming problems they generate. In the plate studies performed by Anderheggen and Knopfel (1972), an attempt was made to address this shortcoming by suggesting various solution strategies based on the revised simplex optimisation algorithm. This type of algorithm was also used by Bottero et al. (1980), who generalised the method of Anderheggen and Knopfel (1972) to include velocity discontinuities in plane strain limit analysis. In their formulation, a three–noded constant strain triangular element is used to model the velocity field, each node has two unknown velocities, and each triangular element is associated with a specified number of unknown plastic multiplier rates (one for each side of the linearised yield surface). To be kinematically admissible, the velocities and plastic multiplier rates are subject to a set of linear constraints arising from the flow rule and the velocities must match the appropriate boundary conditions. For a given set of prescribed velocities, the finite element formulation works by choosing the set of velocities and plastic multiplier rates which minimise the dissipated power. This power is then equated to the power dissipated by the external loads to yield a strict upper bound on the true limit load. To ensure that the finite element formulation leads to a LP problem, the actual yield surface is linearised by using an external polyhedral approximation. Turgeman and Pastor (1982), in an important generalisation, extended their linear programming scheme to handle axisymmetric problems for the case of a Von Mises or Tresca material. Although significant, the formulations of Bottero et al. (1980) and Turgeman and Pastor (1982) both suffer from the disadvantage that the direction of shearing must be specified for each discontinuity a priori. This precludes the use of a large number of discontinuities in an arbitrary arrangement, since it is generally not possible to determine these directions so that the mode of failure is kinematically acceptable. The revised simplex optimisation procedure used by Bottero et al. (1980) also appears to be rather slow and, indeed, they suggested that a more efficient solution strategy needed to be found. One effective means of solving large, sparse LP problems is the steepest edge active set scheme (Sloan, 1988b). Although it was originally proposed for the solution of problems arising from the finite element lower bound method (Sloan, 1988a), this algorithm has also proved efficient for upper bound analysis (Sloan, 1989). This is because the form of the dual upper bound LP problem is very similar to the form of
the lower bound LP problem, with more rows than columns in the constraint matrix and all of the variables unrestricted in sign. All LP formulations mentioned so far are based on the three–noded triangular element. With this simple element it is necessary to use a special grid arrangement, in which four triangles are coalesced to form a quadrilateral with the central node lying at the intersection of the diagonals. If this pattern is not used, then the elements cannot provide a sufficient number of degrees of freedom to satisfy the incompressibility condition, as discussed in detail by Nagtegaal et al. (1974). In response to this shortcoming, Yu et al. (1994) developed a linear strain element for upper bound limit analysis. A major advantage of using this element, rather than a constant strain one, is that the velocity field can be modelled accurately with fewer elements and the incompressibility condition can be satisfied without resorting to a restrictive grid arrangement. More recently, the upper bound formulation of Sloan (1989) was generalised by Sloan and Kleeman (1995) to allow a large number of discontinuities in the velocity field. In the latter formulation, a velocity discontinuity may occur at any edge that is shared by two adjacent triangles, and the sign of shearing is chosen automatically during the optimisation process to give the least amount of dissipated power. Each discontinuity is defined by four nodes and requires four unknowns to describe the tangential velocity jumps along its length. Although it is still based on a LP solver, the Sloan and Kleeman (1995) formulation is computationally efficient for two–dimensional applications and gives good estimates of the true limit load with a relatively coarse mesh. Moreover, it does not require the elements to be arranged in a special pattern in order to model the incompressibility condition satisfactorily. As the finite element formulation of the upper bound theorem is inherently nonlinear, a number of upper bound formulations have been based on nonlinear programming (NLP) methods. Early studies focused mainly on plates and shells and include the work of Hodge and Belytschko (1968), Biron and Chasleux (1972), and Nguyen et al. (1977). The most commonly used solution algorithm in these formulations is the sequential unconstrained minimisation technique (McCormick and Fiacco, 1963) with a Carroll penalty function. Unfortunately, this scheme is not computationally attractive for large scale problems and few practical applications have been considered in the literature. Another drawback, which also persists in more recent upper bound formulations, such as those of Huh and Yang (1991) and Capsoni and Corradi (1997), is that these formulations can only be used with a limited variety of yield functions. In a completely different vein, Jiang (1994) proposed an upper bound formulation based on visco–plasticity theory. Following the practice of others, Jiang (1994) employed an augmented Lagrangian method to solve the resulting nonlinear optimisation problem, but implemented it in conjunction with the algorithm of Uzawa. One year later, Jiang (1995) demonstrated that the same nonlinear programming scheme can be applied to direct upper bound analysis. Despite its good performance for two–dimensional examples, Jiang’s formulation has not been extended to deal with three–dimensional problems. 
More importantly, because the method does not permit discontinuities in the velocity field, it is unlikely to give accurate results for cohesive–frictional materials. A straightforward method for evaluating the ultimate loads of structures has been suggested recently by de Buhan and Maghous (1995). This scheme is based upon the kinematic approach of yield design theory and leads to the problem of minimising a function of a finite number of variables without constraints. In their implementation, the optimisation procedure was carried out by means of the simplex method of Nelder and Mead (1965). Though this formulation was claimed to be computationally efficient, it needed several minutes of CPU time just to solve a plane strain problem with a grid of around one hundred triangles. These timings suggest the technique would have difficulty in coping with large scale three–dimensional geometries, where the number of unknowns is typically much greater than two–dimensional cases. In the same year as the work of de Buhan and Maghous, Liu et al. (1995) proposed a method for performing three–dimensional upper bound limit analysis which uses a direct iterative algorithm. The basic characteristic of this algorithm is that, at each iteration, the rigid zones are distinguished from the plastic zones and are constrained accordingly. This involves modifying the goal function and the constraint conditions and neatly avoids the numerical difficulties that are caused by undetermined rigid zones and an undifferentiable objective function. In essence, the problem is reduced to one of solving a series of relevant elastic problems. According to their paper the process is efficient, numerically stable, and can be implemented easily in an existing displacement finite element code. A new two– and three–dimensional upper bound formulation, which is based on nonlinear programming, has very recently been proposed by Lyamin and Sloan (2000b). This scheme uses a similar solver to their lower bound method (Lyamin and Sloan 1997, 2000a) and employs nodal velocities, element stresses, and a
set of discontinuity variables as the unknowns. Over each element, the velocity field is assumed to vary linearly while the stresses are assumed constant. As in the formulation of Sloan and Kleeman (1995), velocity discontinuities are permitted at shared edges between two elements, and the sign of shearing is chosen automatically to minimise the dissipated power. The upper bound solution is found by using a two–stage quasi–Newton scheme to solve the system of nonlinear equations that define the Kuhn–Tucker optimality conditions. This strategy is very efficient, with preliminary comparisons suggesting a 100–fold speedup over an equivalent linear programming formulation for large scale two–dimensional applications. The scheme can deal with any (convex) type of yield criterion and permits optimisation with respect to both surface and body forces. Because of its speed, the NLP formulation of Lyamin and Sloan (2000b) is well suited to three–dimensional applications.
Figure 2 : Lower bound mesh for strip footing
Figure 3 : Upper bound mesh for strip footing

6.3 Applications
Some typical soil stability problems are now studied to demonstrate the efficiency of the upper and lower bound formulations of Lyamin and Sloan (1997, 2000a, 2000b). Three different meshes have been generated for each of the examples and these can be treated as coarse, medium and fine models of the original problem. All runs use linear elements and the timings are for a Dell Precision 220 PC with a Pentium III 800MHz processor. 6.3.1
Rigid Strip Footing On Cohesive–Frictional Soil
The exact collapse pressure for a rigid strip footing on a weightless cohesive–frictional soil with no surcharge is given by the Prandtl (1920) solution:

q/c′ = [exp(π tan φ′) tan²(π/4 + φ′/2) − 1] cot φ′        (1)

where c′ and φ′ are, respectively, the effective cohesion and the effective friction angle. For a soil with a friction angle of φ′ = 35° this equation gives q/c′ = 46.12. The boundary conditions and material properties used in the lower and upper bound analyses, together with some typical meshes, are shown in Figure 2 and Figure 3.
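For readers who wish to check this value, the short script below (an illustration added for this review, not part of the original analyses) evaluates Equation (1); the ratio q/c′ is simply the Prandtl bearing capacity factor Nc, and for φ′ = 35° it returns the value of about 46.12 quoted in Tables 2 and 3.

```python
import numpy as np

# Evaluate the Prandtl (1920) collapse pressure ratio of Equation (1):
#   q/c' = [exp(pi*tan(phi')) * tan^2(pi/4 + phi'/2) - 1] * cot(phi')
def prandtl_q_over_c(phi_deg):
    phi = np.radians(phi_deg)
    Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2
    return (Nq - 1.0) / np.tan(phi)        # this is the bearing capacity factor Nc

for phi_deg in (20.0, 30.0, 35.0):
    print(f"phi' = {phi_deg:4.1f} deg  ->  q/c' = {prandtl_q_over_c(phi_deg):6.2f}")
# phi' = 35 deg gives q/c' ~ 46.12, the exact value referred to in Tables 2 and 3.
```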
The lower bound results presented in Table 2 compare the performance of a recent nonlinear programming formulation (Lyamin and Sloan 1997, 2000a) with that of an equivalent linear programming formulation (Sloan 1988a, 1988b). The newer procedure demonstrates fast convergence to the optimum solution and, most importantly, the number of iterations required is essentially independent of the problem size. For the coarsest mesh, the nonlinear formulation is nearly four times faster than the linear programming formulation and gives a lower bound limit load that is 2% better. For the finest mesh the speed advantage of the nonlinear procedure is more dramatic, with a 54–fold reduction in the CPU time. In this case, the lower bound limit load is 1.3% below the exact limit load and the analysis uses just 3.6 seconds of CPU time. Because the number of iterations with the new algorithm is essentially constant for all cases, its CPU time grows only linearly with the problem size. This is in contrast to the linear programming formulation, where the iterations and CPU time grow at a much faster rate, and the CPU time savings from the nonlinear method become larger as the problem size increases.

Table 2. Lower bounds for smooth strip footing on weightless cohesive–frictional soil (LP = linear programming with NSID = 24; NLP = nonlinear programming)
| Mesh | LP q/c′ | LP Error* (%) | LP CPU (sec) | LP Iter. | NLP q/c′ | NLP Error* (%) | NLP CPU (sec) | NLP Iter. | LP/NLP collapse pressure | LP/NLP CPU | LP/NLP Iter. |
| coarse | 36.70 | -20.0 | 1.19 | 288 | 37.64 | -18.4 | 0.31 | 21 | 0.98 | 3.8 | 13.7 |
| medium | 42.52 | -7.8 | 25.9 | 1845 | 43.79 | -5.1 | 1.72 | 33 | 0.97 | 15 | 56 |
| fine | 43.99 | -4.6 | 197 | 5552 | 45.53 | -1.3 | 3.63 | 29 | 0.97 | 54 | 191 |
* With respect to the exact Prandtl (1920) solution qexact = 46.12c′. NSID = number of sides in linearised yield surface.
Table 3. Upper bounds for smooth strip footing on weightless cohesive–frictional soil (LP = linear programming with NSID = 24; NLP = nonlinear programming)
| Mesh | LP q/c′ | LP Error* (%) | LP CPU (sec) | LP Iter. | NLP q/c′ | NLP Error* (%) | NLP CPU (sec) | NLP Iter. | LP/NLP collapse pressure | LP/NLP CPU | LP/NLP Iter. |
| coarse | 50.48 | +9.4 | 6.76 | 1396 | 49.82 | +8.0 | 1.82 | 28 | 1.01 | 3.7 | 49.9 |
| medium | 48.77 | +5.7 | 133.7 | 6875 | 48.00 | +4.0 | 5.5 | 29 | 1.02 | 24.3 | 237 |
| fine | 48.03 | +4.1 | 2105 | 16771 | 47.30 | +2.5 | 13.6 | 30 | 1.02 | 155 | 559 |
* With respect to the exact Prandtl (1920) solution qexact = 46.12c′.

The upper bound results presented in Table 3 compare the performance of a new nonlinear programming formulation (Lyamin and Sloan, 2000b) with that of an equivalent linear programming formulation (Sloan 1989, 1988b; Sloan and Kleeman, 1995). As in the lower bound analyses, the new procedure demonstrates fast convergence to the optimum solution with an iteration count that is essentially independent of the problem size. For the coarsest mesh, the nonlinear formulation is nearly four times faster than the linear programming formulation and gives an upper bound limit load which is 1% better. For the finest mesh the speed advantage of the nonlinear procedure is more dramatic, with a 155–fold reduction in the CPU time. In this case, the upper bound limit load is 2.5% above the exact limit load and the analysis uses just 13.6 seconds of CPU time. Comparing the results in Table 2 and Table 3 it may be seen that, for the fine meshes, the nonlinear programming formulations bracket the true collapse load to within 3.8%. The solution times required to achieve this level of accuracy are modest, and the methods can be run easily on a desktop PC. For a similar
discretisation, the lower bound scheme is generally more accurate than the upper bound scheme, especially for analyses involving frictional soils with high friction angles. 6.3.2
Square And Rectangular Footings On Weightless Cohesive–Frictional Soil
Rigorous solutions for the bearing capacity of rectangular footings are difficult to derive and approximations are usually adopted in practice. The most common approach is to apply empirical shape factors to the classical Terzaghi formula for a strip footing (see, for example, Terzaghi, 1943 and Vesic, 1973, 1975). It is of much interest to compare the lower bound bearing capacity values for square and rectangular footings with approximations that are commonly used in practice. Many empirical and semi–empirical engineering solutions for the bearing capacity q have been proposed, but focus here is placed on two that use the modified Terzaghi (1943) equation:

q = λcs λcd λci c′Nc + λqs λqd λqi qs Nq + (1/2) λγs λγd λγi γ B Nγ        (2)

where B is the footing width, qs is the ground surcharge, γ is the soil unit weight, Nc, Nq, Nγ are bearing capacity factors, λcs, λqs, λγs are shape factors, λcd, λqd, λγd are embedment factors and λci, λqi, λγi are inclination factors. In the case of a surface footing on a weightless soil with vertical loading, Equation (2) reduces to the simple form:

q = λcs c′Nc        (3)
In his original study, Terzaghi (1943) presented a table of values for Nc and suggested the use of λcs = 1.3 for square and circular footings. Thirty years later Vesic (1973), on the basis of experimental evidence, advocated the shape factors λcs = 1 + (B/L)(Nq/Nc) for rectangular footings and λcs = 1 + (Nq/Nc) for square footings. In these approximations, Nc and Nq are the Prandtl values Nc = (Nq − 1) cot φ′ and Nq = exp(π tan φ′) tan²(π/4 + φ′/2).

Table 4. Predicted bearing capacities q/c′ = λcsNc for rough square footing and rough rectangular footings on weightless cohesive–frictional soil
| φ′ (°) | Square: Lower bound | Square: Terzaghi (1943) | Square: Vesic (1973) | Rectangular (L/B = 2): Lower bound | Rectangular (L/B = 2): Vesic (1973) | Rectangular (L/B = 5): Lower bound | Rectangular (L/B = 5): Vesic (1973) |
| 0 | 5.54 | 7.41 (25%), λcs = 1.3, Nc = 5.7 | 6.14 (10%), λcs = 1.19, Nc = 5.14 | 5.35 | 5.64 (5%), λcs = 1.10, Nc = 5.14 | 5.17 | 5.34 (3%), λcs = 1.04, Nc = 5.14 |
| 10 | 9.83 | 12.48 (21%), λcs = 1.3, Nc = 9.6 | 10.82 (9%), λcs = 1.30, Nc = 8.35 | 9.16 | 9.59 (4%), λcs = 1.15, Nc = 8.35 | 8.55 | 8.84 (3%), λcs = 1.06, Nc = 8.35 |
| 20 | 20.33 | 23.14 (12%), λcs = 1.3, Nc = 17.8 | 21.23 (4%), λcs = 1.43, Nc = 14.83 | 17.56 | 18.03 (3%), λcs = 1.22, Nc = 14.83 | 15.58 | 16.11 (3%), λcs = 1.09, Nc = 14.83 |
Table 4 summarises various bearing capacity values for a rough square footing and two types of rough rectangular footing. These results were obtained using the boundary conditions and mesh layout shown in Figure 4. All of the cases assume a cohesive–frictional soil with zero self–weight and results are presented for the lower bound method, the original Terzaghi (1943) method, and the Vesic (1973) method. The Terzaghi estimates exceed the square footing lower bounds by an average of around 19%, and would appear
to be unsafe. The Vesic (1973) approximations, on the other hand, are within 5% of the rigorous lower bound results for rectangular footings of all shapes, and their accuracy increases with increasing friction angle. The average number of nodes for the finite element models considered here was around 12000, while the largest mesh analysed had 13392 nodes, 3348 elements and 6444 discontinuities. Bearing in mind that each of these nodes has six unknown stresses, these grids generate very large optimisation problems and it comes as no surprise that the method is computationally demanding. On average, the CPU time for the analyses shown in Table 4 was 4500 sec for a Dell Precision 220 PC with a Pentium III 800MHz processor. Although large, these timings are certainly competitive with those that would be needed for a three-dimensional incremental analysis with the displacement finite element method. Moreover, the solutions have the advantage that the limit load, which is obtained explicitly and does not have to be inferred from a load-deformation plot, is a rigorous lower bound on the true collapse load.
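The Vesic (1973) entries in Table 4 can be reproduced directly from the shape factor expressions given earlier; the sketch below is an illustrative check (not part of the original study) and matches the tabulated values to within rounding.

```python
import numpy as np

# Reproduce the Vesic (1973) columns of Table 4:  q/c' = lambda_cs * Nc, with
# lambda_cs = 1 + (B/L)*(Nq/Nc) and the Prandtl values of Nq and Nc.
def prandtl_factors(phi_deg):
    phi = np.radians(phi_deg)
    if phi_deg == 0.0:
        return 1.0, 2.0 + np.pi                       # Nq = 1, Nc = 2 + pi
    Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2
    return Nq, (Nq - 1.0) / np.tan(phi)

for phi_deg in (0.0, 10.0, 20.0):
    Nq, Nc = prandtl_factors(phi_deg)
    for B_over_L, label in ((1.0, "square"), (0.5, "L/B = 2"), (0.2, "L/B = 5")):
        lam = 1.0 + B_over_L * Nq / Nc                # Vesic shape factor
        print(f"phi' = {phi_deg:4.1f}  {label:8s}  lambda_cs = {lam:.2f}  q/c' = {lam * Nc:6.2f}")
```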
Figure 4 : Smooth rigid rectangular footing on cohesive-frictional soil: (a) general view with applied boundary conditions and (b) typical generated mesh (13392 nodes, 3348 elements, 6444 discontinuities)

7.0
CONSTITUTIVE MODELS FOR GEOMATERIALS AND INTERFACES
As identified previously, one of the primary aims of geotechnical design is to ensure an adequate margin of safety, and this normally involves some form of stability analysis. In the previous section, it was demonstrated how modern computing methods could be combined with the classical limit theorems of plasticity to conduct very accurate and very sophisticated stability analyses of geotechnical problems. Of course, these analyses provide no information about the behaviour of the soil or rock mass prior to collapse. They are not designed to do so. In order to predict the response prior to collapse a complete constitutive model is required for the soil or rock material. Such models must be capable of predicting not only the onset of failure, but also the complete stress-strain response leading up to failure. In many cases the ability to predict the post-failure (post-peak) behaviour is also desirable, and this too demands a complete constitutive model. It is often said that the developments in computer solution methods are far ahead of those related to the characterisation of the mechanical behaviour of geomaterials and interfaces or joints. Fortunately, the recent research emphasis placed on the development and application of constitutive models, so as to account for important factors that were not included in previous empirical and simplified models, has been well directed. Unless the materials and interfaces in geotechnical systems are characterised in such a way that they realistically account for the important factors that influence the material behaviour, results from sophisticated computer methods will have only limited validity, if any. The behaviour of geomaterials and interfaces is affected significantly by factors such as the state of stress or strain, the initial or in situ conditions, the applied stress path, the volume change response, and environmental factors such as temperature and chemicals, as well as time dependent effects such as creep
and the type of loading (static, repetitive and dynamic). In addition, natural geomaterials are often multiphase, and include fluids (water) and gas as well as solid components. The pursuit of the development of constitutive models for these complex materials has involved progressive consideration of the foregoing factors. Most often, the models that have been developed are designed to allow for a limited number of factors only. Recently, efforts have been made to develop more unified models that can allow inclusion of a larger number of factors in an integrated mathematical framework. A review of some of the recent developments is presented below, together with comments on their capabilities and limitations. In view of space limitations and the large number of available publications, such a review can only be brief and somewhat selective. 7.1
Elasticity Models
The theory of linear elasticity has a long history of application to geotechnical problems. Initially, emphasis was placed on the use of linear elasticity for a homogeneous isotropic medium. With the development of numerical solution techniques, the behaviour of non-homogeneous and anisotropic media was addressed. Nonlinear elastic or piecewise linear elastic models were subsequently developed to account mainly for the influence of the state of stress or strain on the material behaviour. These include the models based on the functional representation of one or more observed stress-strain and volumetric response curves. The hyperbolic representation (Kondner, 1963; Duncan and Chang, 1970; Desai and Siriwardane, 1984; Fahey and Carter, 1993) has often been used for static and quasi-static behaviour, and can provide satisfactory prediction of the load-displacement behaviour under monotonic loading. However, such functional simulation of curves is not capable of accounting for other factors such as stress path, volume change, and repetitive and dynamic loading. Furthermore, such simulations are usually not appropriate when evaluation of stresses and deformations (strains) is required in local zones of geotechnical systems. Over the past few decades, great emphasis has been placed on research into the small strain behaviour of geomaterials. This has been seen as a fundamental issue for many problems in geotechnical engineering (e.g., Burland, 1989; Atkinson et al., 1990; Atkinson, 1993; Ng et al., 1995; Puzrin and Burland, 1998). In parallel with detailed experimental work to investigate the small strain response (e.g., Shibuya et al., 1995; Jamiolkowski et al., 1999), several constitutive models that include the influence of strain level on the stress-strain response have been developed. There are also examples of their application to boundary value problems documented in the literature (e.g., Jardine et al., 1986; Powrie et al., 1998; Addenbrooke et al., 1997; Schweiger et al., 1999).
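As an illustration of the functional representations mentioned above, the following minimal sketch evaluates a Kondner-type hyperbolic deviator stress curve of the Duncan and Chang (1970) form. The parameter values are purely illustrative assumptions and are not taken from any of the studies cited.

```python
def hyperbolic_q(eps_a, E_i, q_f, R_f=0.9):
    """Duncan-Chang style hyperbola: q = eps_a / (1/E_i + R_f*eps_a/q_f).

    eps_a : axial strain, E_i : initial tangent stiffness, q_f : deviator
    stress at failure, R_f : failure ratio (assumed value). The curve tends
    to the asymptote q_f/R_f as the strain grows.
    """
    return eps_a / (1.0 / E_i + R_f * eps_a / q_f)

# illustrative parameters only (not from the paper)
E_i, q_f = 30e3, 300.0          # kPa, kPa
for eps in (0.001, 0.005, 0.01, 0.02, 0.05):
    print(eps, round(hyperbolic_q(eps, E_i, q_f), 1))
```
7.2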
Hyperelasticity And Hypoelasticity
Hyperelastic or higher order elastic models have been considered for geological materials. However, they do not allow for factors such as stress path and irreversible deformations. Hence, their applications to geomaterials have been quite limited. However, it has been demonstrated that hypoelasticity and its combination with plasticity can provide useful models for some geomaterials (e.g., Desai and Siriwardane, 1984; Kolymbas, 1988; Desai, 1999). 7.3
Plasticity Models
The available elastoplastic models can be divided into two categories: (a) classical plasticity models, and (b) continuous yielding or hardening plasticity models. In the classical models, such as those based on the von Mises, Mohr-Coulomb and Drucker-Prager criteria, plastic yielding occurs after the elastic response, when a specific yield stress or criterion is exceeded. These models are capable of predicting ultimate or failure stresses and loads. However, they cannot allow properly for factors such as continuous yielding that in many cases occurs from the very beginning of loading, volume change response and the effect of stress path on the failure strength. Models based on the critical state concept (Roscoe et al., 1958), cap models (e.g., DeMaggio and Sandler, 1971) and their various modifications were proposed largely to allow for the continuous yielding
response exhibited by many geological materials. As a result, they have often been used in computer methods. At the same time, it should be recognised that they do suffer from some limitations, including the following:
• they cannot predict dilative volume change before the peak stress,
• they do not allow for different strengths mobilised under different stress paths,
• they do not include the effect of deviatoric plastic strains on the yielding behaviour, and
• they are usually based on associative plasticity, and hence they do not always adequately include the effects of friction.
In order to overcome these limitations, a number of advanced hardening models have been proposed with continuous yield surfaces. These include the hierarchical single surface (HISS) model (Desai et al., 1986; Desai, 1995, 1999), as well as those proposed by other investigators (e.g., Lade and Duncan, 1975; Kim and Lade, 1988; Nova, 1988; Lagioia and Nova, 1995; Whittle et al., 1994). Kinematic and anisotropic hardening models have been proposed to account for cyclic behaviour under dynamic loading (Mroz, 1967; Dafalias, 1979; Mroz et al., 1978; Prevost, 1978; Somasundaram and Desai, 1988; Wathugala and Desai, 1993). Most of these models involve moving yield surfaces and may involve associated computational difficulties. Models based on the unified disturbed state concept (DSC) do not involve moving surfaces, and generally they are computationally more efficient than the more advanced hardening and softening models (Katti and Desai, 1995; Shao and Desai, 2000).
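For illustration, a classical yield criterion of the kind referred to above can be evaluated directly; the sketch below checks a stress state against the Mohr-Coulomb criterion written in terms of principal effective stresses. The numerical values are illustrative only.

```python
import math

def mohr_coulomb_f(sigma1, sigma3, c, phi_deg):
    """Mohr-Coulomb yield function in terms of the major and minor principal
    effective stresses (compression positive):
        f = (sigma1 - sigma3) - (sigma1 + sigma3)*sin(phi) - 2*c*cos(phi)
    f < 0 : elastic state, f = 0 : yield, f > 0 : inadmissible stress state."""
    phi = math.radians(phi_deg)
    return (sigma1 - sigma3) - (sigma1 + sigma3) * math.sin(phi) - 2.0 * c * math.cos(phi)

# illustrative check for c' = 5 kPa, phi' = 30 degrees
print(mohr_coulomb_f(sigma1=100.0, sigma3=40.0, c=5.0, phi_deg=30.0))   # negative: elastic
```
7.4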
Hypoplasticity
Of course, other possibilities of describing the mechanical behaviour of soils have been proposed in the literature such as hypoplastic formulations (e.g., Kolymbas, 1991; Niemunis and Herle, 1997). Contrary to elastoplasticity, in hypoplastic models no distinction is made between elastic and plastic deformation and no yield and plastic potential surfaces or hardening rules are needed. The aim in developing hypoplastic models for soils is to discover a single tensorial equation that can adequately describe many important features of the mechanical behaviour of the material. It is also desirable that the parameters of the model for a granular material depend directly on the properties of the grains. There are several useful treatments in the literature of hypoplastic models applied to soils (e.g., Kolymbas, 1991; Gudehus, 1996; von Wolffersdorff, 1996; Wu et al., 1996; Bauer, 1996; Herle and Gudehus, 1999). Despite their apparent simplicity some indeed are able to reproduce features of real soils, such as pressure and density coupling, dilatancy, contractancy, variable friction angle and stiffness. Although some of the features of these models seem to be very attractive, so far they have been applied only to sands. More experience in the application of these models is still required. 7.5
Degradation And Softening
Plasticity based models that allow for softening or degradation have also been proposed. In these models the yield surfaces expand during loading up to the peak stress, and then during softening they are allowed to contract. This approach may suffer from non-uniqueness of computer solutions because the discontinuous nature of the material is not adequately taken into account. The classical continuum damage models (Kachanov, 1986) allow for degradation in the material stiffness and strength due to microcracking and resulting damage. However, since the coupling between the undamaged and damaged parts is not accounted for, these models are essentially local and can entail spurious mesh dependence when used in numerical solutions. In order to render damage models non-local, various enrichments have been proposed. These include consideration of the effect of microcrack interaction on the observed response (Bazant, 1994), and gradient and Cosserat theories (de Borst et al., 1993; Mühlhaus, 1995). Such enhanced models can provide computations that are independent of the mesh layouts. However, they can be complex and may entail other computational difficulties. The DSC also allows for the coupling within its framework, and hence, leads to non-local models that are free from spurious mesh dependence (Desai et al., 1997; Desai, 1999). The problem of strain localisation that accompanies softening and degradation has received a great deal of attention in the literature, ever since the developments in numerical methods made possible the solution of boundary value problems with such material features. It is in problems such as these that the inherent material response, i.e., softening, and the numerical solution strategy have a strong interaction. Numerical
procedures that deal inappropriately with post-peak material response are likely to provide spurious answers. It is therefore worth providing a brief summary of the main approaches that have been developed to deal with the problem of strain localisation. It should be noted here that to date the methods discussed below have been applied almost exclusively to two-dimensional (plane strain) problems. 7.5.1
Enhanced Finite Element Models
In this approach the domain undergoing localisation is divided into two regions: the localised zone in which large displacements take place and the non-localised zone in which the strains remain small. The actual boundary of these two zones is not very well defined, but provided that the dimension of the representative domain under consideration is large compared to the thickness of the localised zone, the shear band may be treated, at least conceptually, as a plane of discontinuity in displacement. Two conceptually different approaches have been proposed. One way is to use special finite elements with appropriate discontinuous or modified shape functions (Belytschko et al., 1988; Ortiz et al., 1987). In this approach, firstly, a bifurcation analysis is carried out at the element level. When the onset of localisation is detected, the element interpolation is extended by adding to it suitably defined shape functions that reproduce the localised deformation modes. The other approach is to introduce the effect of the shear band after the onset of localisation through a constitutive framework (Pietruszczak and Mróz, 1981), which is generally known as the homogenisation technique. 7.5.2
Non-Local Models
A fully non-local model is obtained by introducing a relationship between the average stresses and the average strains, which complies with the relationship between macroscopic and microscopic stress for granular bodies. The macroscopic stress can be thought of as some average of the more rapidly varying microscopic stress (Brinkgreve and Vermeer, 1995). In most applications, the non-local formulation is restricted to a specific part of the constitutive equation. Bazant and Lin (1988) averaged the invariant of plastic shear strains, but alternatively an average of the plastic strain rate tensor could be taken and then the invariant formulated. Either approach leads to a set of rate equations, which remain elliptic after the onset of localisation. A modified non-local model overcoming some of the numerical problems generally associated with non-local models has been presented by Vermeer and Brinkgreve (1994) and Brinkgreve and Vermeer (1995), with the considerable advantage of a relatively easy implementation into existing finite element codes. The numerical results for a purely cohesive material presented by Brinkgreve and Vermeer (1995) showed that the results are mesh independent.
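The essence of the non-local approach is the replacement of a local quantity by a weighted spatial average. The following one-dimensional sketch illustrates such averaging of a plastic strain invariant using an assumed Gaussian weight and internal length; it is intended only to convey the idea, not to reproduce any particular formulation cited above.

```python
import numpy as np

def nonlocal_average(x, field, length_scale):
    """Replace a local field (e.g. an invariant of plastic strain) by its
    weighted spatial average, using a Gaussian weight with internal length
    'length_scale' (an assumed regularisation parameter)."""
    averaged = np.empty_like(field)
    for i, xi in enumerate(x):
        w = np.exp(-0.5 * ((x - xi) / length_scale) ** 2)
        averaged[i] = np.sum(w * field) / np.sum(w)
    return averaged

# a sharp, localised plastic strain profile smoothed over an internal length of 0.05 m
x = np.linspace(0.0, 1.0, 201)
local_strain = np.where(np.abs(x - 0.5) < 0.01, 0.05, 0.0)
print(nonlocal_average(x, local_strain, length_scale=0.05).max())
```
7.5.3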
Gradient Plasticity Models
Gradient dependent models can be viewed as an approximation of fully non-local models. The spatial derivatives of the inelastic state variables enter the continuum description of the material in addition to the inelastic state variables. The gradient terms can be introduced either into the flow rule (Vardoulakis and Aifantis, 1991) or the dilatancy constraint (Vardoulakis and Aifantis, 1989). Due to the gradient term the tangent stiffness matrix becomes non-symmetric even for associated plasticity. Although gradient plasticity theories are highly versatile for describing localisation of deformation in a continuum, a disadvantage of the approach is the introduction of an additional variable at global level in addition to the conventional displacement degrees of freedom. 7.5.4
Micropolar Continua
In this approach rotational degrees of freedom are added to the conventional translational degrees of freedom. Additional strain quantities (relative rotation and micro-curvatures) and additional stress quantities (couple stresses) enter into the continuum description (Mühlhaus and Vardoulakis, 1987; de Borst, 1991). Since these static and kinematic components are related through a ‘bending’ modulus, which has the dimension of length incorporated, an internal length scale is introduced, even in the elastic regime. The determination of the material properties from test data has been given relatively little attention so far. A further disadvantage is that the rotations emerge as additional degrees-of-freedom at global level, which increases the computational effort.
More recently, Cosserat continua have been successfully applied to model strain localisation within the framework of hypoplasticity (Tejchman and Bauer, 1996), using the mean grain diameter as the characteristic length. Although results look promising, the fineness of the finite element mesh required to avoid mesh dependence seems to prohibit practical applications without significant further enhancements. Cosserat-type models have also been adapted to the solution of boundary value problems in rock mechanics, for cases where the rock mass has a distinct structure, e.g., foliation (Adhikary et al., 1995, 1996). 7.5.5
Viscoplastic Models
In viscoplasticity the strain rate included in the constitutive equation prevents the set of equations describing the dynamic motion of the softening solid from becoming elliptic. Failure modes are usually accompanied by high strain rates and therefore the inclusion of the strain rate into the constitutive equation seems natural (Perzyna, 1992). This kind of approach is purely mathematical in the sense that the additional parameters needed are not directly derivable from experiments and may have little physical meaning. Therefore a semi-inverse approach is needed (Sluys and de Borst, 1992). A basic drawback of the use of viscosity to restore well-posedness of the governing equations is the fact that the regularising effect gradually vanishes for the rate-independent limit or a very slow process, i.e., the method is generally not suitable for static loading. Some recently published results by Oka et al. (1994), however, suggest that by introducing additional material functions the viscoplastic approach still offers some potential for analysing quasi-static strain localisation problems. 7.5.6
Mesh Adaptivity Techniques
Mesh adaptation is a numerical approach to capture the localisation pattern by refining the mesh in the regions where the strain gradients are highest and vice versa for regions with low strain gradients. The total number of elements may also be increased so that the global error associated with the mesh meets certain accuracy requirements (Hicks, 1995). In the adaptive procedure the first step is to define the remeshing criterion. This is closely linked to error estimation (Zienkiewicz and Zhu, 1987) and the mesh is redefined when the specified error has been exceeded. It is also possible to redefine the mesh at fixed intervals, but an error analysis is still needed in forming the new mesh. The second step is to generate the new mesh by increasing the mesh density in the areas where the error is highest. The third step is to map the current state variables, i.e., stresses, strains and strain hardening or softening parameters, from the old mesh to the new one, in a process which is called smoothing. For both of these steps several possibilities exist and the final accuracy of the analysis is dependent on the choices made. With standard finite elements, the mesh geometry may induce the formation of unrealistic and wrong failure mechanisms - especially when low order triangles are used (Pastor et al., 1992); this is overcome by using adaptive meshes. However, one problem still remains: as the mesh is refined, the shear band thickness will decrease to zero, leading to zero energy dissipation. Therefore, other limitations need to be imposed, such as restricting the minimum element size to a finite proportion (e.g., 1/3) of the actual shear band width (Zienkiewicz et al., 1995). 7.6
Saturated And Unsaturated Materials
The development of models for saturated geomaterials has been the subject of research for a long time. Terzaghi’s effective stress concept (Terzaghi, 1943) is considered to be the earliest model for saturated soils and forms the basis of most of the stress-strain models developed since then. Constitutive models capable of describing at least some aspects of the complex behaviour of partially saturated soils were first explored seriously in relation to foundation problems associated with shrinking and swelling clay soils (e.g., Aitchison, 1956, 1961; Richards, 1992). The recent emphasis on geoenvironmental engineering has spurred considerable renewed interest in the characterisation of constitutive models for partially or unsaturated materials. The literature on the subject is now wide in scope and a review is available in the text by Desai (1999). Some of the recent models based on the critical state concept, plasticity theory and the DSC are given in publications by Alonso, et al. (1990), Bolzon et al. (1996), Pietruszczak and Pande (1996), and Geiser et al. (1997).
7.7
Liquefaction
Liquefaction represents the phenomenon of instability of the saturated geomaterial's microstructure at certain critical values of stress, pore water pressure, (plastic) strains and plastic work or dissipated energy, as affected by the initial conditions, e.g., initial effective mean pressure and physical state (density). A number of empirical approaches, based on index properties and on experimentally observed states of pore water pressure and stress at which liquefaction can occur, have often been used to predict the onset of liquefaction (Casagrande, 1976; Castro and Poulos, 1977; Seed, 1979; National Research Council, 1985; Ishihara, 1993). Although such conventional approaches have provided useful results, they are not based on mechanistic considerations that can account for the microstructural modifications in the deforming material. Dissipated energy has been proposed as a criterion for the identification of liquefaction potential (Nemat-Nasser and Shokooh, 1979; Davis and Berrill, 1982, 1988; Figueroa et al., 1994; Desai, 2000). Recently, the DSC has been proposed as a mechanistic procedure that is fundamental yet simple enough for practical application to liquefaction problems (Desai et al., 1998; Park and Desai, 1999; Desai, 2000). 7.8
Comments
Most of the constitutive models in the past have addressed a limited number of factors at any time. As a result, separate models are often warranted for different behavioural features of the same material. This can lead to complexities such as a greater number of parameters, which is the sum of the parameters for each model, for given characteristic(s). Hence, it is desirable and useful to develop unified models that can permit consideration of the behavioural features as special cases. A unified approach can lead to compact models that are significantly simplified, involve fewer parameters, and are easier to implement in computer procedures. Such a unified concept, called the disturbed state concept (DSC), has been developed and found to be successful for numerous geological materials and interfaces. The DSC allows implicitly for the nonlocal effects and the characteristic dimension and as a result does not suffer from spurious mesh dependence (Desai, 1999). Because of its potentially unifying qualities, a brief summary of the DSC is provided in the following section. 7.9
Disturbed State Concept (DSC)
The recently developed DSC represents a unified approach for constitutive modelling of geomaterials and interfaces and joints. It allows for various factors such as elastic, plastic and creep strains, microcracking leading to degradation or damage, and stiffening or healing under thermomechanical loading. Its hierarchical framework permits the user to adopt specialised version(s), e.g., including elasticity, plasticity, viscoplasticity, with disturbance (degradation and stiffening), depending upon the material and application need. Details of the DSC are given in various publications (Desai, 1995, 2000; Desai et al., 1986, 1991, 1995, 1997, 1998; Desai and Salami, 1987; Desai and Varadarajan, 1987; Desai and Fishman, 1991; Desai and Ma, 1992; Desai and Rigby, 1997; Desai and Toth, 1996; Liu et al., 2000; Park and Desai, 1999; Shao and Desai, 2000; Wathugala and Desai, 1993) and a textbook (Desai, 1999). The essential idea of the DSC is that the response of a material to variations in both internal and external conditions can be described in terms of the responses of the constituent material parts in two reference states, the “relatively intact” state (RI) and the “fully adjusted” state (FA). The overall response of the material, i.e., the observed response, is then related to the responses at the reference states through a disturbance function, which provides a coupling and interpolation mechanism between the responses of the material in the RI and FA states. A summary of the formulation of DSC models is provided below.

Figure 5 : Schematic illustration of stress-strain behaviour and material disturbance assumed in DSC models
7.9.1
Incremental Equations
The incremental constitutive equations for the DSC are given by:
dσ^a = (1 − D) dσ^i + D dσ^c + dD (σ^c − σ^i)     (4)

or

dσ^a = (1 − D) C^i dε^i + D C^c dε^c + dD (σ^c − σ^i)     (5)
where σ and ε are the stress and strain vectors, respectively, and superscripts a, i, and c denote the observed, relatively intact (RI), and fully adjusted (FA) states, respectively. D is the disturbance, assumed here to be a scalar function (it can also be treated as a tensorial quantity), dD denotes the increment or rate of disturbance, and C denotes the constitutive matrix. In the DSC, a material element during deformation is considered to be composed of material parts in different reference states, as indicated in Figure 5. For example, in the case of a dry material, the initial continuum state is considered to be a reference state and is called the relatively intact (RI) state. During deformation, the material transforms to a fully adjusted (FA) state at disturbed locations, as a consequence of the changes in its microstructure due to relative particle motions, microcracking or stiffening. The observed or average response of the material (a) is then expressed in terms of the responses of the material parts in the RI (i) and FA (c) states. The disturbance, D, acts as the coupling and interpolation mechanism between the RI and FA states so as to yield the observed response. The RI response can be characterised by using continuum models such as those based on elasticity, plasticity or elastoviscoplasticity. For instance, the RI response can be characterised by using the hierarchical single surface (HISS) plasticity models. For the associative case, the yield surface, F, in the HISS-δ0 model is given by:
F = J̄2D − (−α J̄1^n + γ J̄1^2)(1 − β Sr)^(−0.5) = 0     (6)
where J1 is the first invariant of the stress tensor, σij, J2D is the second invariant of the deviatoric stress tensor, Sij, Sr is the stress ratio defined as Sr = (√27/2) J3D J2D^(−3/2), and J3D is the third invariant of Sij. The overbar denotes a quantity non-dimensionalised with respect to atmospheric pressure (pa). J̄1 = J1 + 3R, where 3R is the bonding (tensile or cohesive) strength, γ and β are ultimate (failure) parameters, n is the phase change parameter associated with the state at which volume change from contraction to dilation occurs, and α is the hardening or growth function, which, in a simple form, is expressed as:
α = a1 / ξ^η1     (7)

where ξ = ∫ ((dε^p)^T (dε^p))^(1/2) is the trajectory of total plastic strains, and a1 and η1 are the hardening
parameters. The FA response can be characterised in different ways. If the FA (microcracked or damaged) material is assumed to carry no stress at all, i.e., it acts as a “void” as in the classical damage model, Equation (4) will specialise to that for the classical damage model (Kachanov, 1986). This characterisation is considered to be inappropriate, as it does not include the coupling between the RI and FA states. If the FA material is assumed to carry hydrostatic stress and no shear stress, its behaviour (Cc) can be characterised on the basis of its bulk response. In a general method, the FA response can be characterised by using the critical state concept (Roscoe et al., 1958), in which the FA material continues to carry the shear stress and deform in shear without change in its volume, under a given mean pressure. The critical state equations to characterise the FA response are given by (Roscoe et al., 1958; Desai, 1999):
√(J2D^c) = m J1^c     (8)

e^c = e0^c − λ ln(J1^c / 3pa)     (9)
where e is the voids ratio and λ is a material parameter. 7.9.2
Disturbance
Disturbance is expressed as the ratio of the volume of the material in the FA state to the total volume. In a phenomenological context, D is expressed on the basis of observed response (e.g., stress-strain, volumetric or void ratio, effective stress or pore water pressure and nondestructive properties such as P- or S-wave velocities), and in terms of the deviatoric plastic strain trajectory or plastic work (Desai, 1999). For example:

D = (σ^i − σ^a) / (σ^i − σ^c)     (10)

and

D = Du (1 − e^(−A ξD^Z))     (11)

where σ is an appropriate stress measure, i.e., σ1, σ1 − σ3 or √J2D, ξD is the deviatoric plastic strain trajectory, and A, Z and Du are disturbance parameters. 7.9.3
Specialisations
Equation (4) includes various continuum and damage models as special cases. For example, if D = 0, the following results:

dσ^i = C^i dε^i     (12)

where C^i can be based on elasticity, plasticity or elastoviscoplasticity theories. If the FA part cannot carry any stress at all, Equation (4) gives the classical damage model:

dσ^a = (1 − D) C^i dε^i − dD σ^i     (13)

which does not include the effect of the microcracked or damage parts on the observed behaviour.
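To make the structure of Equations (4) and (11) concrete, the following scalar (one-component) sketch assembles an observed stress increment from assumed RI and FA responses and an assumed set of disturbance parameters; setting D = dD = 0 recovers the RI response of Equation (12). All numerical values are illustrative only.

```python
import math

def disturbance(xi_d, A, Z, D_u):
    """Disturbance function of Equation (11): D = D_u * (1 - exp(-A * xi_d**Z))."""
    return D_u * (1.0 - math.exp(-A * xi_d ** Z))

def observed_stress_increment(d_sig_i, d_sig_c, sig_i, sig_c, D, dD):
    """Scalar form of Equation (4):
       d_sigma_a = (1 - D)*d_sigma_i + D*d_sigma_c + dD*(sigma_c - sigma_i)."""
    return (1.0 - D) * d_sig_i + D * d_sig_c + dD * (sig_c - sig_i)

# illustrative numbers: RI and FA stress increments, current RI and FA stresses
d_sig_i, d_sig_c = 10.0, 2.0        # kPa
sig_i, sig_c = 150.0, 90.0          # kPa
A, Z, D_u = 1.5, 1.0, 0.9           # assumed disturbance parameters
xi_old, xi_new = 0.10, 0.12         # deviatoric plastic strain trajectory

D_old, D_new = disturbance(xi_old, A, Z, D_u), disturbance(xi_new, A, Z, D_u)
d_sig_a = observed_stress_increment(d_sig_i, d_sig_c, sig_i, sig_c, D_new, D_new - D_old)
print(round(d_sig_a, 3))

# with D = 0 and dD = 0 the observed increment reduces to the RI increment (Equation 12)
assert observed_stress_increment(d_sig_i, d_sig_c, sig_i, sig_c, 0.0, 0.0) == d_sig_i
```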
7.9.4
Interfaces And Joints
An interface or joint in a geological system constitutes a junction between two (similar or dissimilar) materials and represents a discontinuity where relative motions (slip, debonding or separation, rebonding and interpenetration) can occur. In order to allow for the relative motions, it becomes necessary to develop constitutive models based on appropriate and specialised laboratory tests. The force and kinematic conditions at the interface are introduced by developing various models: springs to represent normal and shear response, special joint elements to allow for relative displacements (Goodman et al., 1968; Ghaboussi et al., 1973) and force constraints (Katona, 1983). A number of models have been proposed as modifications or variations of the foregoing approaches. An alternative, called the thin-layer element approach, has been proposed (Desai et al., 1984). In this approach the “smeared” interface zone is treated as a “solid” element, and its stiffness properties are evaluated in the same manner as the other solid elements in the neighbouring regions. However, the material parameters are obtained from special shear tests, such as simple shear or direct shear tests (Desai and Rigby, 1997). The DSC can also be applied for the characterisation of interfaces and joints. The same mathematical framework, Equation (4), is applicable. As a result, the models for geological materials and interfaces are essentially the same. This eliminates the inconsistency in previous approaches in which different models were used for soils and interfaces. For example, if the HISS-δ0 model is used to define the RI behaviour, the yield function is given by (Desai and Fishman, 1991; Desai and Ma, 1992):

F = τ^2 + α σn^n − γ σn^q = 0     (14)

where τ and σn are the shear and normal stresses in a two-dimensional interface, respectively, γ is the ultimate or failure parameter, n is the phase change parameter, q allows a curved ultimate (failure) envelope, and α is the hardening function expressed in terms of the plastic relative shear and normal displacements. The disturbance is defined on the basis of results from tests involving the application of shear and normal loads. 7.9.5
Parameters
The material parameters in the DSC have physical meanings in that they are related to physical states during deformation. Their number is usually smaller than that in other available models of comparable capabilities. For instance, with two elastic (E, ν), and five plasticity (γ, β, a1, η1, R) parameters, the HISSδ0 model can include the effects of the factors previously mentioned, that are not present in the critical state models (with 6 parameters) and Cap models (with 10 parameters). The parameters in the DSC model can be found from laboratory
uniaxial, shear, triaxial or multiaxial tests. The standard triaxial compression (and extension) test is suitable to quantify the parameters defining the elasticity, plasticity and disturbance. For viscous behaviour, creep or relaxation tests are required. Details of procedures for finding the parameters are available in various publications (e.g., Desai and Ma, 1992; Desai et al., 1995; Desai, 1999).

Figure 6 : Finite element mesh used for field pile test

Figure 7 : Comparison between field measurements and predictions from DSC and HISS models: one-way cyclic load tests
7.9.6
Validations and Applications
The DSC model and its specialised versions have been calibrated with respect to test data for a wide range of geological materials and interfaces, and other materials such as concrete, ceramics, metals and alloys (Desai, 1995, 1999; Desai et al., 1984, 1991, 1995, 1997, 1998; Desai and Salami, 1987; Desai and Fishman, 1991; Desai and Ma, 1992; Desai and Toth, 1996; Desai and Rigby, 1997). They have been validated with respect to tests used to find the parameters and independent tests not used in finding them. The DSC has been implemented in nonlinear finite element codes for static and dynamic analysis including dry and saturated materials. The codes are used to predict observed behaviour of a number of practical problems in geotechnical engineering. These include two- and three-dimensional analysis of footings and pile foundations, reinforced retaining walls, dams and excavations, underground works (tunnels), anchor-soil systems, landslides, consolidation problems, and seepage and liquefaction in shake table tests. Two typical examples are given below to illustrate some of the capabilities of these sophisticated models. 7.9.7
Example - Cyclic Analysis Of Piles In Marine Clay
Figure 6 shows the finite element mesh for a displacement-controlled field load test performed by Earth Technology Corp. (1986) on a 76.2 mm diameter pile segment. The numerical calculations involved simulation of the in situ stress conditions, driving effects, consolidation, tension tests and finally, axial cyclic loading (Shao and Desai 2000).
The material properties for the DSC with the HISS-δ0 model for the RI behaviour of the marine clay were determined from static and cyclic triaxial tests on cylindrical specimens and multiaxial tests on cubical specimens (Katti and Desai, 1995; Shao and Desai 2000; Wathugala and Desai, 1993). The parameters for the interface between the steel pile and the clay were determined from laboratory tests using the cyclic multi degree-of-freedom (CYMDOF) shear device, including measurements of pore water pressures. Details of the parameters are given elsewhere (Katti and Desai, 1995; Shao and Desai 2000; Wathugala and Desai, 1993). The finite element analyses were performed by using the DSC model and the kinematic hardening HISSδ0* model (Wathugala and Desai, 1993) implemented in the DSC-DYN2D computer code. The latter allows for cyclic degradation and interface effects. Figures 7 and 8 show comparisons between the computed results using the DSC and δ0* model and the observed data for one-way and two-way cyclic load tests, respectively. These figures also include the predicted growth of disturbance during the loading. It can be seen that the DSC model provides very good correlation with the observed results. Furthermore, it shows improved predictions compared to those from the HISS-δ0* plasticity model because the DSC model allows for cyclic degradation and interface effects.
Figure 8 : Comparison between field measurements and predictions from DSC and HISS models: two-way cyclic load tests 7.9.8
Example - Dynamic Analysis Of Shake Table Test And Liquefaction
Figure 9(a) shows the shake table test configuration (Akiyoshi et al., 1996). The applied loading involved specifying horizontal displacement, X, at the bottom nodes, given by the following function:

X = ux sin(2π f t)     (15)
Figure 9 : Shake table test setup and grain size distribution for sands: (a) shake table test set-up (Akiyoshi et al., 1996); (b) grain size distribution curves (per cent finer against particle size) of Ottawa sand and Fuji river sand
where ux is the amplitude (= 0.0013 m), f is the frequency (= 5 Hz) and t is time. Fuji river sand was used in the shake table test. Because various physical properties of the Fuji river sand and the Ottawa sand are similar, e.g., the grain-size distribution, Figure 9(b), the multiaxial cyclic test data for the Ottawa sand were used in the finite element analysis (Park and Desai 1999).
The finite element mesh is shown in Figure 10. The steel in the test box was assumed to be linear elastic, while the DSC model was used to represent the behaviour of the sand. The idea of the repeating side boundaries was employed, in which the displacements of the side boundary nodes on the same horizontal planes were assumed to be the same. The analysis used the computer code DSC-DYN2D. Figure 11 shows comparisons between the measured and computed excess pore water pressures with time, at the point (depth = 300 mm) shown as a solid dot in Figure 12. The test data indicate liquefaction after about 2.0 secs when the pore water pressure equals the initial effective stress. It can be seen from Figure 11 that the finite element predictions compare well with the test results. Figures 12(a) to (d) show growth of disturbance in the mesh at typical times = 0.5, 1.0, 2.0 and 10.0 secs, respectively; the plot of computed disturbance at depth = 300 mm is shown in Figure 12(e). Laboratory tests on the Ottawa sand showed that the liquefaction initiates at an average value of the critical disturbance, Dc = 0.84 (Desai et al., 1998). At time t = 0.5 sec, the computed disturbance in the sand is well below the critical value in all elements. At t = 1.0 sec, the disturbance has grown, and its value is between 0.50 and 0.70 in the elements in the middle zone, which is below Dc = 0.84. At t = 2.0 secs, the disturbance has reached values higher than 0.80 at and below the depth of 300 mm; this indicates that liquefaction initiates at about 2.0 secs. At t = 10 secs, the disturbance in about 80% of the test box has grown to a value equal to or greater than the critical value, indicating that the soil has liquefied and failed.

Figure 10 : Finite element mesh used for simulation of shake table test

Figure 11 : Excess pore pressures at a depth of 300 mm
Figure 12 : Growth of the disturbed zone in sand: (a) time = 0.5 sec; (b) time = 1.0 sec; (c) time = 2.0 secs; (d) time = 10.0 secs; (e) disturbance vs. time at depth = 300 mm

These and other results show that the DSC model can provide a fundamental and simplified approach for the evaluation of liquefaction potential (Desai, 2000; Desai et al., 1998).
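In practice, this liquefaction assessment amounts to tracking the computed disturbance and comparing it with the critical value. The following sketch illustrates such a screening using Equation (11) with an assumed disturbance parameter set and an assumed plastic strain trajectory history; only the critical value Dc = 0.84 is taken from the study described above.

```python
import math

def disturbance(xi_d, A=5.0, Z=0.5, D_u=0.99):
    """Equation (11); A, Z and D_u are assumed illustrative parameters."""
    return D_u * (1.0 - math.exp(-A * xi_d ** Z))

def time_to_liquefaction(times, xi_history, D_crit=0.84):
    """Return the first time at which the disturbance reaches the critical
    value D_crit (here 0.84, the average value reported for the Ottawa sand),
    or None if it is never reached."""
    for t, xi in zip(times, xi_history):
        if disturbance(xi) >= D_crit:
            return t
    return None

# assumed, monotonically growing deviatoric plastic strain trajectory history
times = [0.5 * k for k in range(0, 21)]            # 0 to 10 s in 0.5 s steps
xi_history = [0.05 * t ** 1.5 for t in times]
print(time_to_liquefaction(times, xi_history))
```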
7.10
Structured Soils
Much of the research on constitutive models conducted to date has focused on the behaviour of soil samples reconstituted in the laboratory. The emphasis on reconstituted materials arose because of both convenience and the need to maintain control over test and sample conditions, if models truly reflecting the response to changing applied stresses were to be developed. However, natural soils (and rock masses) differ significantly from laboratory prepared specimens, and these differences need to be accounted for if the developed models are to be applicable to materials found in nature. For example, there still exists a need to develop constitutive models that can take into account the structure and fabric of most natural materials. The important influence of structure on the mechanical properties of soil has long been recognised (e.g., Casagrande, 1932; Mitchell, 1976). The term “soil structure” is used here to mean the arrangement and bonding of the soil constituents, and for simplicity it encompasses all features of a soil that cause its mechanical behaviour to be different from that of the corresponding reconstituted soil. The removal of soil structure is referred to as destructuring, and destructuring is usually a progressive process. In recent years, there have been several studies in which a theoretical framework for describing the behaviour of structured soils has been formulated (e.g., Burland, 1990; Leroueil and Vaughan, 1990; Gens and Nova, 1993; Cotecchia and Chandler, 1997; Liu and Carter, 1999). It is rational to study the behaviour of natural soils by using knowledge of the corresponding reconstituted soils as a frame of reference (Burland, 1990, Liu and Carter, 1999). The disturbed state concept theory (DSC), described previously, provides a convenient framework to predict the difference in structured soil behaviour from that of the reconstituted soil, and it may also have the potential to describe the destructuring of a natural soil with loading. One such DSC model, suitable for isotropic and one-dimensional compression, has been formulated by Liu et al. (2000), based on a particular disturbance function. Simulations of the proposed DSC model for both structured soil and reconstituted soil have been made and compared to experimental data. An illustration of the accuracy of this method for incorporating the effects of soil structure is included here. The disturbed state concept allows flexibility in choosing ways to define the two reference states according to the practical problem of interest and the knowledge available, including its response in laboratory and field tests. A special selection of the reference states for quantifying the influence of soil structure on soil behaviour has been suggested by Liu et al. (2000). The fully adjusted state (FA) was chosen to be the corresponding reconstituted state. The fully adjusted state for the virgin compression of a structured soil is therefore based on the following two assumptions: (1) the material is reconstituted and has the same mineralogy as the structured soil, and (2) the soil is in a state of virgin compression and the stress state is the same as that applied to the structured soil. The relative intact state (RI) was chosen to be the “zero state”, i.e., the state with no response to stress (a perfectly rigid material). As indicated previously, a key step in building a DSC model lies in finding a disturbance function. 
Based on a study of a large body of experimental data on the compression behaviour of structured clays and other soils (e.g., Burland, 1990; Leroueil and Vaughan, 1990; Smith et al., 1992), the following disturbance function, Dεv, for the compression behaviour of naturally structured soils was proposed by Liu et al. (2000).
Dεv = 1 + b (p′y,i / p′)     (16)
where b is the disturbance index for compression, p′ is the mean effective stress, and p′y,i represents the mean effective stress at which virgin yielding first occurs. Further details of the mathematical formulation of this approach can be found in Liu et al. (2000). An example of the application of the DSC to the destructuring of clay during one-dimensional compression is shown in Figure 13 for Winnipeg clay (after Graham and Li, 1985). Simulations using the DSC for Winnipeg clay and six other soils over a range of mean effective stresses from 10 kPa to 2,000 kPa have been made by Liu et al. (2000). For one soil (Mexico City clay) the magnitude of volumetric strain reached was as high as 120%. It was found that the proposed DSC model can describe successfully the compression behaviour of both naturally structured and artificially structured soils, regardless of whether the behaviour of the corresponding reconstituted soil is linear or non-linear in the e-lnp′ space. A complete
three-dimensional model, capable of quantifying the shearing response of structured soil using the DSC is now required before this approach can be adopted in situations other than one-dimensional compression.

Figure 13 : Comparison of DSC predictions and test results for a structured clay (Winnipeg clay, σ′vy,i = 200 kPa, bv = 0.28; volumetric strain εv (%) against vertical effective stress σ′v (kPa), showing structured and reconstituted soil, simulation and test)

7.11
Which Constitutive Model Should Be Chosen?
Choosing a suitable constitutive model for any given soil for use in the solution of a particular boundary value problem can prove problematical. Many theoretical models have been published in the literature and each has its own particular strengths and all have their limitations. Determining the key model parameters is often a major difficulty, either because some model parameters may have no real physical meaning, or inadequate test data may be available. Considerable experience and judgement are therefore required in order to make a sensible, meaningful selection of the most appropriate model and the most suitable values of its parameters. It is unlikely that a universal constitutive model, capable of providing accurate predictions of soil behaviour for all soil types, all loading and all drainage conditions, will be proposed, at least in the foreseeable future. Although the DSC seems to offer a framework for unifying many of these models, it seems likely, at least in the short to medium term, that progress will continue to be made through the development and use of models of limited applicability, but nevertheless capable of accurate predictions. Knowing the limitations of such models is as important as knowing their strengths.
7.11.1 Example – Foundations On Carbonate Sands
The issue of selecting the most suitable stress-strain model and its parameter values comes into sharp focus for the case of shallow foundations resting on structured carbonate soils of the seabed. This is because natural samples of these materials are often lightly to moderately cemented and the degree of cementation can be highly variable. There are difficulties associated with obtaining high quality samples of these natural soils. In order to avoid these difficulties, and as a means of initiating meaningful studies of cemented soils, laboratory test samples have been prepared by artificially cementing reconstituted samples of carbonate sands recovered from the sea floor (e.g., Huang, 1994; Huang and Airey, 1998; Carter and Airey, 1994). The results of a sequence of related studies on artificially cemented calcareous soils (Huang, 1994; Yeoh, 1996; Pan, 1999), and attempts to model them numerically, are described briefly here.
7.11.2 Typical Behaviour
The volumetric behaviour of the artificially cemented sands is illustrated in Figure 14. Typical behaviour during drained triaxial compression is indicated in Figures 15 and 16. Figure 14 includes results for samples of cemented and uncemented soils, prepared with different initial densities. All specimens exhibit consolidation behaviour resembling overconsolidated clay. Initially the response is stiff, but eventually at a sufficiently high stress level the rate of volume change with stress level exhibits a marked change and the samples become much more compressible. The transition is a result of breakdown in the cement bonding accompanied by some particle crushing and perhaps particle rearrangement. High volume compressibility is eventually exhibited by all specimens, and the presence of an initial cementation only slightly affects the position of the consolidation curve. Figures 15 and 16 indicate that the cemented material exhibits relatively brittle behaviour at low confining pressures, accompanied by a tendency to dilate. However, even at relatively moderate confining
stresses shearing induces significant volume reduction in the specimens and the behaviour appears to be much more ductile, although there is still some brittleness due to cementation.

Figure 14 : Isotropic compression of carbonate sands (specific volume v = 1 + e against mean effective stress for uncemented and 20% cement samples)

Figure 15 : Typical stress-strain behaviour of carbonate sand (deviator stress against axial strain at effective confining stresses of 0.1, 0.3, 0.6 and 1.2 MPa)

Figure 16 : Typical volumetric behaviour of carbonate sand (volumetric strain against axial strain at effective confining stresses of 0.1, 0.3, 0.6 and 1.2 MPa)

7.11.3 Stress-Strain Models
A selection of constitutive models was investigated to determine their suitability to represent the stress-strain behaviour of the artificially cemented carbonate soil in both “single element” (i.e., triaxial) tests and boundary value problems (Islam, 1999). The models considered are listed in Table 5, together with the number of parameters required to describe completely the constitutive behaviour. None employs the DSC. All models are elastoplastic and involve either strain hardening or softening. All have only one yield surface, except the Molenkamp (1981) model (also referred to as “Monot”), that has two. Some adopt associated plastic flow, while others do not. None includes the effects of cementation directly by including a finite cohesion and tensile strength. Rather, they incorporate its effects indirectly by regarding a cemented soil as an overconsolidated material. The model developed by Lagioia and Nova (1995) was designed specifically for a cemented carbonate soil, though not the soil specifically considered here. The Molenkamp model has been used previously in a comprehensive study of the behaviour of foundations on calcarenite on the North-West Shelf of Australia (Smith et al., 1988), as well as in studies of other granular media (e.g., Hicks, 1992). Although formally it requires specification of a relatively large number of input parameters (23), many of these parameters may be assigned “standard” values.
Table 5. Stress-strain models
Model | Parameters
Modified Cam Clay | 5
Lagioia & Nova | 6
SU2 | 6
Molenkamp | 23

The model labelled “SU2” is based very closely on the well-known Modified Cam Clay (MCC) model, but with one important distinction (Islam, 1999). It does not use the MCC ellipse as a yield surface, however it does use the original ellipse as a plastic potential. In SU2 the yield surface in p′-q space is a flatter ellipse, since this shape better matches the experimental data for the uncemented carbonate sand. However, it does underpredict the peak strength of the cemented soil. The new flattened ellipse corresponds to an increased separation of the isotropic consolidation and critical state lines in voids ratio – effective stress space. Whereas in Modified Cam Clay this separation corresponds to a ratio of mean effective stresses of 2, in the model SU2 a ratio of 5 has been found to fit the data for carbonate soils more accurately. These details are illustrated graphically in Figure 17. It is evident from this figure that the flow law resulting from the adoption of the MCC ellipse as a plastic potential produces good predictions of the volumetric response of both the cemented and uncemented soil. Values for the parameters of each model have been selected to provide a good fit to the triaxial test data. The fitting process was conducted for a comprehensive series of test results, using data from both drained and undrained tests, in order to obtain the best possible overall agreement between model predictions and measured behaviour. A typical comparison between the model predictions and the experimental data is given in Figures 18 and 19 for the case of a drained triaxial test conducted at a confining pressure of 300 kPa. Figure 18 indicates that most of the models considered predict the strength of the cemented soil at large strains and under-predict the shear strength at smaller values of axial strain. The models can reasonably predict the final “critical state” shear strength, but not the larger peak strengths due to the contribution of cementation. In this case the Modified Cam Clay model provides the best prediction of the stress-strain curve, consistent with the fact that it was also able to predict reasonably accurately the peak strengths (Figure 17). However, Figure 19 reveals that the MCC model is a poor predictor of the volumetric behaviour during drained shearing. It predicts dilation, while the test data indicate that the sample contracted during shearing. All other models provide reasonable predictions of the volume reduction. Clearly the adoption of an associated flow rule in the MCC model is inappropriate for this material.

Figure 17 : Details of the model SU2 for uncemented (left) and artificially cemented (right) carbonate soils (normalised deviatoric stress against normalised mean stress, showing the Modified Cam Clay and SU2 yield loci, the critical state line and experimental data; dilatancy against stress ratio q/p′)
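The role of the flow rule in this comparison can be illustrated with the standard Modified Cam Clay relations. The sketch below evaluates the MCC ellipse and the dilatancy implied by normality to it; the flattened SU2 ellipse itself is defined by Islam (1999) and is not reproduced here, and the parameter values used are illustrative only.

```python
def mcc_yield(p, q, p0, M):
    """Modified Cam Clay ellipse: f = q**2 - M**2 * p * (p0 - p).
    f < 0 inside the ellipse, f = 0 on it."""
    return q ** 2 - M ** 2 * p * (p0 - p)

def mcc_dilatancy(eta, M):
    """Plastic strain increment ratio d(eps_v^p)/d(eps_q^p) obtained from
    normality to the MCC ellipse: (M**2 - eta**2) / (2*eta), with eta = q/p.
    Positive values imply plastic compression, negative values dilation."""
    return (M ** 2 - eta ** 2) / (2.0 * eta)

M, p0 = 1.4, 400.0                                   # illustrative values (p0 in kPa)
print(mcc_yield(p=200.0, q=250.0, p0=p0, M=M))       # negative: inside the yield surface
for eta in (0.5, 1.4, 2.0):
    print(eta, round(mcc_dilatancy(eta, M), 3))      # contractive, zero at eta = M, dilative
```

With an associated flow rule the sign of the predicted volume change is therefore tied directly to whether the stress ratio lies below or above M, which is consistent with the discrepancy between the MCC prediction and the measured contraction noted above.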
Figure 18 : Comparison of stress-strain predictions (deviator stress against axial strain; carbonate sand, North West Shelf, Australia; triaxial test, cell pressure 300 kPa, density 13 kN/m³, cement content 20%)

Figure 19 : Comparison of volumetric-strain predictions (volume strain against axial strain for the same test)

7.11.4 Model Footing Tests – Vertical Loading
Model footing tests have been carried out on 300 mm diameter samples of the artificially cemented carbonate sand described previously (Yeoh, 1996). An isotropic effective confining pressure of 300 kPa was applied to the sample and a 50 mm diameter rigid footing was pushed into its surface at a constant rate, slow enough for fully drained conditions to be achieved. A typical set of results from this series of tests is presented in Figure 20, which also indicates the model predictions obtained using the same set of model parameters obtained by fitting each model to the triaxial test data. It is evident that most models provide reasonable predictions of the performance of the footing throughout the entire range of footing displacements considered, i.e., up to a settlement equal to 30% of the footing diameter. However, the MCC model generally overpredicts the stiffness of the footing. This is not unexpected, since for this case MCC predicts dilation during shearing rather than the observed compression.
Figure 20 : Predictions of model footing tests (finite element simulation of an axisymmetric model footing on artificially cemented carbonate soil; bearing pressure against normalised displacement d/B (%))

7.11.5 Model Footing Tests – Inclined Loading
Predictions of the SU2 model for cases of circular footings subjected to inclined loading (Islam, 1999) have also been compared with experimental measurements (Pan, 1999) made at 1g in the laboratory. These comparisons are shown in Figure 21, where satisfactory agreement can be observed over a large range of values of the normalised footing displacement.
Figure 21 : Circular footing on cemented sand (average traction against normalised displacement d/B (%) for load inclinations of 10, 20 and 30 degrees; predictions and data)

7.11.6 Centrifuge Tests
Further validation of the SU2 model in predicting the behaviour of footings on carbonate sand is demonstrated in Figure 22 (Islam, 1999), where model predictions are compared with centrifuge test results obtained by Finnie (1993). These tests involved the vertical loading of a rigid circular footing resting on a finite layer of cemented carbonate sand, overlying carbonate silt. The failure mode in this case involved punching, with the region of cemented soil immediately under the footing penetrating the underlying silt.
Figure 22 : Circular footing on cemented layer over silt (average bearing pressure against normalised vertical displacement d/B (%); B = 4 m, t/B = 0.5; strong, medium and weak crusts; data and predictions)
7.11.7 Discussion
It may be concluded from this study of stress-strain behaviour that the Modified Cam Clay model matches well the yield surface of the artificially cemented soil but is a poor predictor of the volume change that accompanies shearing. This has important consequences when the model is used in the prediction of a boundary value problem, as demonstrated. The other models considered in this study provide reasonable matches to the single element behaviour, although they are generally incapable of predicting accurately the peak strengths. However, these same models provide quite reasonable predictions of the behaviour of model footings, especially at very large footing displacements. This indicates that the peak strengths of the materials have only a small influence on the overall response of the footings. Excluding MCC, it would seem that there is little to choose between the models for use in analysing the behaviour of vertically loaded footings on this type of material. However, the model SU2 has the attraction of being closely based on the well-known MCC model, with only one simple but significant change, and it has relatively few parameters to be quantified, all of which have a clear physical meaning. Again, it is emphasised that none of the models considered here incorporates explicitly the effects of cementation or the breakdown of that cementation due to increasing isotropic and deviatoric stress. For cases where these factors may be important, different constitutive models will be required, and the disturbed state concept, described previously, is one method that shows promise for capturing these effects. 8.0
8.0 SOME LIMITATIONS AND PITFALLS OF NUMERICAL ANALYSIS

8.1 Introduction
When using numerical analysis it is necessary to discretise the geometry of the boundary value problem into a finite number of sub-regions. For example, when using the finite element method the geometry is
divided into an assemblage of finite elements, whereas in the finite difference approach the geometry is represented by a grid of points. Approximations are then made as to how the primary variables, usually displacements, vary within these sub-regions. Clearly this introduces approximations, and to obtain accurate results it is necessary to have sufficiently small sub-regions in areas where there are high gradients of stress and strain. This problem is fundamental to all numerical analysis and occurs no matter what constitutive model is being used, although the use of a nonlinear constitutive model can complicate the issue. The other main source of error involved in nonlinear numerical analysis is associated with the integration of the constitutive equations. Approximations must be made for this to be achieved, and many solution strategies exist for performing this task. Other approximations involved in numerical analysis are those arising from the idealisations made when reducing the real problem to a form which can be analysed. This usually involves geometric approximations and idealisations as to material behaviour. A further potential source of error is a lack of in-depth understanding of the constitutive models employed to represent soil behaviour; this is a common source of error, due to the complexities of many of the constitutive models currently available. To illustrate the potential pitfalls associated with the above approximations, a short discussion of nonlinear solution strategies and of two common problems that occur with constitutive models is presented below. An in-depth discussion of many of the restrictions and pitfalls involved in numerical analysis of geotechnical problems is given by Potts and Zdravkovic (1999, 2000).
8.2 Nonlinear Numerical Analysis
When analysing any boundary value problem, four basic solution requirements need to be satisfied: equilibrium, compatibility, constitutive behaviour and the boundary conditions. Nonlinearity introduced by the constitutive behaviour causes the governing equations to be reduced to an incremental form. For example, for the finite element method the equations take the form:
$$[K_G]^i \{\Delta d\}^i_{nG} = \{\Delta R_G\}^i \tag{17}$$
where [K_G]^i is the incremental global system stiffness matrix, {∆d}^i_{nG} is the vector of incremental nodal displacements, {∆R_G}^i is the vector of incremental nodal forces and i is the increment number. To obtain a solution to a boundary value problem, the change in boundary conditions is applied in a series of increments and for each increment Equation (17) must be solved. The final solution is obtained by summing the results of each increment. Due to the nonlinear constitutive behaviour, the incremental global stiffness matrix [K_G]^i depends on the current stress and strain levels and therefore is not constant but varies over an increment. Unless a very large number of small increments is used, this variation should be accounted for. Hence, the solution of Equation (17) is not straightforward and different solution strategies exist. The objective of all such strategies is the solution of Equation (17), ensuring satisfaction of the four basic solution requirements listed above. The solution strategy is a key component of a nonlinear analysis, as it can strongly influence the accuracy of the results and the computer resources required to obtain them. Many different solution strategies exist, but few comparative studies have been performed to establish their merits for geotechnical analysis. To illustrate the limitations and pitfalls that can arise, three categories of solution algorithm (namely, tangent stiffness, visco-plastic and modified Newton-Raphson) are considered and compared.
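The overall structure shared by all of these solution strategies can be sketched in a few lines of code. The following Python fragment is purely illustrative: assemble_stiffness, external_increment and update_state are hypothetical placeholders for the usual finite element operations, and the way in which update_state integrates the constitutive model is precisely where the strategies discussed below differ.

```python
import numpy as np

# Illustrative sketch of the incremental solution of Equation (17).
# All callables passed in are hypothetical placeholders, not part of any package.
def solve_incrementally(assemble_stiffness, external_increment, update_state,
                        n_dof, n_inc, state=None):
    d = np.zeros(n_dof)                       # accumulated nodal displacements
    for i in range(n_inc):
        K = assemble_stiffness(state)         # [K_G]^i built from the current state
        dR = external_increment(i)            # {Delta R_G}^i for this increment
        dd = np.linalg.solve(K, dR)           # {Delta d}^i_nG from Equation (17)
        state = update_state(state, dd)       # integrate the constitutive model
        d += dd                               # final solution = sum of increments
    return d, state
```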
8.3 Tangent Stiffness Method

8.3.1 Introduction
The tangent stiffness method, sometimes called the variable stiffness method, is the simplest solution strategy. This is the method implemented in a variety of computer codes including CRISP (Britto and Gunn, 1987), which is widely used in engineering practice. In this approach, the incremental stiffness matrix [K_G]^i in Equation (17) is assumed to be constant over each increment and is calculated using the current stress state at the beginning of each increment. This is equivalent to making a piece-wise linear approximation to the nonlinear constitutive behaviour. To illustrate
the application of this approach, the simple problem of a uniaxially loaded bar of nonlinear material is considered, see Figure 23. If this bar is loaded, the true load-displacement response is as shown in Figure 24. This might represent the behaviour of a strain hardening plastic material which has a very small initial elastic domain.
8.3.2 Numerical Implementation
In the tangent stiffness approach the applied load is split into a sequence of increments. In Figure 24 three increments of load are shown as ∆R1, ∆R2 and ∆R3. The analysis starts with the application of ∆R1. The incremental global stiffness matrix [K_G]^1 for this increment is evaluated based on the unstressed state of the bar corresponding to point ‘a’. For an elasto-plastic material this might be constructed using the elastic constitutive matrix [D]. Equation (17) is then solved to determine the nodal displacements {∆d}^1_{nG}. As the material stiffness is assumed to remain constant, the load-displacement curve follows the straight line ‘ab′’ on Figure 24. In reality, the stiffness of the material does not remain constant during this loading increment and the true solution is represented by the curved path ‘ab’. There is therefore an error in the predicted displacement equal to the distance ‘b′b’; however, in the tangent stiffness approach this error is neglected.

Figure 23 : Uniaxial loading of a bar

The second increment of load, ∆R2, is then applied, with the incremental global stiffness matrix [K_G]^2 evaluated using the stresses and strains appropriate to the end of increment 1, i.e., point ‘b′’ on Figure 24. Solution of Equation (17) then gives the nodal displacements {∆d}^2_{nG}. The load-displacement curve follows the straight path ‘b′c′’ on Figure 24. This deviates further from the true solution, the error in the displacements now being equal to the distance ‘c′c’. A similar procedure occurs when ∆R3 is applied. The stiffness matrix [K_G]^3 is evaluated using the stresses and strains appropriate to the end of increment 2, i.e., point ‘c′’ on Figure 24. The load-displacement curve moves to point ‘d′’ and again drifts further from the true solution. Clearly, the accuracy of the solution depends on the size of the load increments. For example, if the increment size were reduced so that more increments were needed to reach the same accumulated load, the tangent stiffness solution would be nearer to the true solution.

Figure 24 : Application of the tangent stiffness algorithm to the uniaxial loading of a bar of nonlinear material

From the above simple example it may be concluded that, in order to obtain accurate solutions to strongly nonlinear problems, many small solution increments are required. The results obtained using this method can drift from the true solution and the stresses can fail to satisfy the constitutive relations. Thus the basic solution requirements may not be fulfilled. As shown later in this paper, the magnitude of the error is problem dependent and is affected by the degree of material nonlinearity, the geometry of the problem and the size of the solution increments used. Unfortunately, in general, it is impossible to predetermine the size of solution increment required to achieve an acceptable error.

The tangent stiffness method can give particularly inaccurate results when soil behaviour changes from elastic to plastic or vice versa. For instance, if an element is in an elastic state at the beginning of an increment, it is assumed to behave elastically over the whole increment. This is incorrect if during the increment the behaviour becomes plastic, and it results in an illegal stress state that violates the constitutive model. Such illegal stress states can also occur for plastic elements if the increment size used is too large; for example, a tensile stress state could be predicted for a constitutive model that cannot sustain tension.
This can be a major problem with critical state type models, such as modified Cam clay, which employ a v-ln p′ relationship (v = specific volume, p′ = mean effective stress), since a tensile value of p′ cannot be accommodated. In that case, either the analysis has to be aborted or the stress state has to be modified in some arbitrary way, which would cause the solution to violate the equilibrium condition and the constitutive model.
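The drift described above is easy to reproduce for a one-dimensional bar. The sketch below assumes a purely hypothetical hardening response, R = R_ult(1 − e^(−d/d_ref)), applies three equal load increments using the tangent stiffness evaluated at the start of each increment, and compares the result with the exact displacement. The computed displacement falls short of the true value, i.e., the predicted response is too stiff, mirroring the drift sketched in Figure 24.

```python
import numpy as np

# Tangent stiffness idea for a bar with an assumed (hypothetical) true response
# R = R_ult * (1 - exp(-d / d_ref)); the stiffness at the start of each increment
# is used for the whole increment, so the solution drifts from the true curve.
R_ult, d_ref = 100.0, 1.0

def tangent_stiffness(d):
    return (R_ult / d_ref) * np.exp(-d / d_ref)   # dR/dd of the assumed response

def true_displacement(R):
    return -d_ref * np.log(1.0 - R / R_ult)       # exact inverse of the response

d = 0.0
for dR in (20.0, 20.0, 20.0):                     # three equal load increments
    d += dR / tangent_stiffness(d)                # piece-wise linear update
print(f"tangent stiffness displacement after a load of 60: {d:.3f}")
print(f"true displacement at the same load:                {true_displacement(60.0):.3f}")
```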
8.3.3 Uniform Compression Of A Mohr-Coulomb Soil
To illustrate some of the above deficiencies, drained one-dimensional loading of a soil element (i.e., an ideal oedometer test) is considered. This is shown graphically in Figure 25. Lateral movements are restrained and the soil sample is loaded vertically by specifying vertical movements along its top surface. No side friction is assumed and therefore the soil experiences uniform stresses and strains. Consequently, to model this using finite elements, only a single element is needed. There is no discretisation error; the finite element program is essentially used only to integrate the constitutive model over the loading path.

Figure 25 : Uniform one-dimensional compression

Firstly, it is assumed that the soil behaves according to a linear elastic perfectly plastic Mohr-Coulomb model with Young’s modulus E′ = 10000 kPa, Poisson’s ratio ν = 0.2, cohesion c′ = 0, angle of shearing resistance φ′ = 30° and angle of dilation ψ = 30°. As the angles of dilation, ψ, and shearing resistance, φ′, are the same, the plastic flow rule is associated. For this analysis the yield function and plastic potential are given by:

$$F(\{\sigma'\},\{k\}) = \frac{J}{\left(\dfrac{c'}{\tan\phi'} + p'\right) g(\theta)} - 1 = 0 \tag{18}$$
where

$$g(\theta) = \frac{\sin\phi'}{\cos\theta + \dfrac{\sin\theta \sin\phi'}{\sqrt{3}}} \tag{19}$$

the mean effective stress

$$p' = \frac{\sigma_1' + \sigma_2' + \sigma_3'}{3} \tag{20}$$

the deviatoric stress

$$J = \sqrt{\tfrac{1}{6}\left[(\sigma_1' - \sigma_2')^2 + (\sigma_2' - \sigma_3')^2 + (\sigma_3' - \sigma_1')^2\right]} \tag{21}$$

and Lode's angle

$$\theta = \tan^{-1}\left[\frac{1}{\sqrt{3}}\left(\frac{2(\sigma_2' - \sigma_3')}{\sigma_1' - \sigma_3'} - 1\right)\right] \tag{22}$$
The vector {k} contains hardening or softening parameters, which in turn may depend on the state parameters. For this material the values of the vector {k} are all zero. It is also assumed that the soil sample has an initial isotropic stress σ′v = σ′h = 50 kPa and that loading is always sufficiently slow to ensure drained conditions.

Figure 26 shows the stress path in J-p′ space predicted by a tangent stiffness analysis in which equal increments of displacement were applied to the top of the sample. Each increment gave an incremental axial strain ∆εa = 3%. Also shown on the figure is the true solution. This was obtained by noting that initially the soil is elastic and that it only becomes elasto-plastic when it reaches the Mohr-Coulomb yield curve. In J-p′ space it can be shown that the elastic stress path is given by:
$$J = \frac{3(1 - 2\nu)}{\sqrt{3}\,(1 + \nu)}\,(p' - p_i') = 0.866\,(p' - 50) \tag{23}$$
The Mohr-Coulomb yield curve is given by Equation (18), which, with the parameter values listed above, gives:

$$J = 0.693\,p' \tag{24}$$
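As a quick numerical cross-check of Equations (18) to (24), the following sketch (assuming the parameter values quoted above) evaluates g(θ) for the oedometric condition, locates the point where the elastic stress path meets the Mohr-Coulomb yield curve, and computes the corresponding axial strain.

```python
import numpy as np

# Numerical check of Equations (18)-(24) for the oedometer example
# (assumed parameters from the text: E' = 10000 kPa, nu = 0.2, c' = 0,
#  phi' = 30 deg, initial isotropic stress p'_i = 50 kPa).
E, nu, p_i = 10000.0, 0.2, 50.0
phi = np.radians(30.0)

def g(theta):
    """g(theta) from Equation (19)."""
    return np.sin(phi) / (np.cos(theta) + np.sin(theta) * np.sin(phi) / np.sqrt(3.0))

# Oedometric compression gives theta = -30 deg (Equation (22) with sigma'_2 = sigma'_3),
# so the Mohr-Coulomb yield curve with c' = 0 is J = g(-30 deg) * p'  (Equation (24)).
m_yield = g(np.radians(-30.0))                               # ~0.693
# Elastic oedometric stress path, Equation (23): J = m_el * (p' - p'_i)
m_el = 3.0 * (1.0 - 2.0 * nu) / (np.sqrt(3.0) * (1.0 + nu))  # ~0.866

# Intersection of the elastic path with the yield curve
p_yield = m_el * p_i / (m_el - m_yield)                      # ~250 kPa
J_yield = m_yield * p_yield                                  # ~173 kPa

# Axial strain needed to reach yield under one-dimensional elastic compression
dsig_v = (p_yield - p_i) * 3.0 * (1.0 - nu) / (1.0 + nu)     # change in sigma'_v
eps_a = dsig_v * (1.0 + nu) * (1.0 - 2.0 * nu) / (E * (1.0 - nu))   # ~0.036
print(f"p' = {p_yield:.0f} kPa, J = {J_yield:.0f} kPa, eps_a = {100 * eps_a:.1f}%")
```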
Combining Equations (23) and (24) gives the stress state at which the stress path reaches the yield surface. This occurs when J = 173 kPa and p′ = 250 kPa. It can be shown that this corresponds to an applied axial strain εa = 3.6%. Consequently, the true solution follows the path ‘abc’, where ‘ab’ is given by Equation (23) and ‘bc’ is given by Equation (24). Inspection of Figure 26 indicates a discrepancy between the tangent stiffness and the true solution, with the former lying above the latter and indicating a higher angle of shearing resistance, φ′. The reason for this discrepancy can be explained as follows.

Figure 26 : Oedometer stress path predicted by the tangent stiffness algorithm

For the first increment of loading the material constitutive matrix is assumed to be elastic and the predicted stress path follows the path ‘ab′’. The applied incremental axial strain ∆εa = 3% is less than the εa = 3.6% required to bring the soil to yield at point ‘b’. As the soil is assumed to be linear elastic, the solution for this increment is therefore correct. For the second increment of loading the incremental global stiffness matrix [K_G]^2 is based on the stress state at the end of increment 1 (i.e., point ‘b′’). Since the soil is elastic here, the elastic constitutive matrix [D] is used again. The stress path now moves to point ‘c′’. As the applied strain εa = (∆εa1 + ∆εa2) = 6% is greater than the εa = 3.6% required to bring the soil to yield at point ‘b’, the stress state now lies above the Mohr-Coulomb yield surface. The tangent stiffness algorithm has overshot the yield surface. For increment three the algorithm recognises that the soil is plastic at point ‘c′’ and forms the incremental global stiffness matrix [K_G]^3 based on the elasto-plastic matrix, [D^ep], consistent with the stress state at ‘c′’. The stress path then moves to point ‘d′’. For subsequent increments the algorithm uses the elasto-plastic constitutive matrix, [D^ep], and traces the stress path ‘d′e′’.

The reason why this part of the curve is straight, with an inclination greater than that of the correct solution, path ‘bc’, can be found by inspecting the elasto-plastic constitutive matrix, [D^ep], defined by Equation (25). In this expression F is the yield function, P is the plastic potential, {m} is a vector of state parameters, the values of which are immaterial, and the parameter A reflects the influence of hardening or softening on the incremental stress-strain response. For perfect plasticity A = 0.
$$[D^{ep}] = [D] - \frac{[D]\,\dfrac{\partial P(\{\sigma\},\{m\})}{\partial \sigma}\left\{\dfrac{\partial F(\{\sigma\},\{k\})}{\partial \sigma}\right\}^{T}[D]}{\left\{\dfrac{\partial F(\{\sigma\},\{k\})}{\partial \sigma}\right\}^{T}[D]\,\dfrac{\partial P(\{\sigma\},\{m\})}{\partial \sigma} + A} \tag{25}$$
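Equation (25) translates directly into a few lines of linear algebra. The following sketch is a generic implementation of that expression for vector-valued gradient terms; it is illustrative only and is not tied to any particular program.

```python
import numpy as np

# Elasto-plastic constitutive matrix of Equation (25).
# D  : elastic constitutive matrix
# dF : gradient vector of the yield function, dF({sigma},{k})/dsigma
# dP : gradient vector of the plastic potential, dP({sigma},{m})/dsigma
# A  : hardening/softening parameter (A = 0 for perfect plasticity)
def elasto_plastic_matrix(D, dF, dP, A=0.0):
    numerator = D @ np.outer(dP, dF) @ D        # [D]{dP}{dF}^T[D]
    denominator = dF @ D @ dP + A               # {dF}^T[D]{dP} + A
    return D - numerator / denominator
```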
For the current model the elastic [D] matrix is constant, and as the yield and plastic potential functions are both assumed to be given by Equation (18), the variation of [D^ep] depends on the values of the partial differentials of the yield function with respect to the stress components. In this respect, it can be shown that the gradient of the stress path in J-p′ space is given by:
$$\frac{\partial F(\{\sigma'\},\{k\})/\partial p'}{\partial F(\{\sigma'\},\{k\})/\partial J} = \frac{J}{p'} \tag{26}$$
As this ratio is first evaluated at point ‘c′’, which is above the Mohr-Coulomb yield curve, the stress path sets off at the wrong gradient for increment three. This error remains for all subsequent increments. The error in the tangent stiffness approach can therefore be associated with the overshoot at increment 2. If the increment sizes had been selected such that at the end of an increment the stress path just reached the yield surface (i.e., at point ‘b’), the tangent stiffness algorithm would then give the correct solution. This is shown in Figure 27 where, in the analysis labelled A, the first increment was selected such that ∆εa = 3.6%. After this increment the stress state was at point ‘b’, which is correct, and for increment 2 and subsequent increments the correct elasto-plastic constitutive matrix [D^ep] was used to obtain the incremental stiffness matrix [K_G]^i. As the matrix [D^ep] remains constant along the stress path ‘bc’, the solution is independent of the size of the increments from point ‘b’ onwards. In the analysis labelled B in Figure 27, a much larger first increment, ∆εa = 10%, was applied. This causes a large overshoot on the first increment and results in a significant divergence from the true solution, even if the subsequent increments are reduced to 1%. As noted above, once the stress state has overshot, making subsequent increments smaller does not improve the solution.

Figure 27 : Effect of the first increment size on a tangent stiffness prediction of an oedometer stress path

It can be concluded that, for this particular problem, the tangent stiffness algorithm is always in error, unless the increment size is such that at the end of an increment the stress state happens to be at point ‘b’. Because the solution to this simple one-dimensional problem is known, it can be arranged for this to occur, as for analysis A in Figure 27. However, in general multi-axis boundary value problems, the answers to which are not known, it is impossible to choose the correct increment sizes so that overshoot never occurs. The only recourse is to use a very large number of small load increments and hope for the best.

Another source of error arising from the way the tangent stiffness method works is that the answers depend on the way the yield function is implemented. While it is perfectly acceptable, from a mathematical point of view, to write the yield surface in either of the forms shown in Equations (27) or (18), the predictions from the tangent stiffness algorithm will differ if, as is usually the case, overshoot occurs.

$$F(\{\sigma'\},\{k\}) = J - \left(\frac{c'}{\tan\phi'} + p'\right) g(\theta) = 0 \tag{27}$$
For the simple oedometer situation, this can be seen by calculating the partial differentials in Equation (26) for the yield function given by Equation (27). This gives:

$$\frac{\partial F(\{\sigma'\},\{k\})/\partial p'}{\partial F(\{\sigma'\},\{k\})/\partial J} = g(\theta) \tag{28}$$
As noted above, this equation gives the gradient of the resulting stress path in J-p′ space. Whereas Equation (26) indicates that the inclination depends on the amount of overshoot, Equation (28) indicates that the inclination is constant and equal to the gradient of the Mohr-Coulomb yield curve. The two results are compared in Figure 28. The stress path based on the yield function written in the form of Equation (18) appears to pass through the origin of stress space, but to have an incorrect slope, indicating a value of φ′ that is too high, but the correct value of c′. In contrast, the stress path based on the yield function written in the form of Equation (27) is parallel to the true solution, but does not pass through the origin of stress space, indicating that the material has a fictitious c′, but the correct φ′. Clearly, if there is no overshoot, both formulations give the same result, which for this problem agrees with the true solution.

Figure 28 : Effect of yield function implementation on errors associated with the tangent stiffness algorithm

The reason for this inconsistency is that, in theory, the differentials of the yield function are only valid if the stress state is on the yield surface, i.e., F({σ′},{k}) = 0. If it is not, it is theoretically incorrect to use the differentials and inconsistencies will arise. The implications for practice are self-evident. Two different pieces of software which purport to use the same Mohr-Coulomb condition can give very different results, depending on the finer details of their implementation. This is clearly yet another drawback of the tangent stiffness algorithm for nonlinear analysis.

The analysis labelled A in Figure 27 was performed with a first increment of axial strain of ∆εa = 3.6% and subsequent increments of ∆εa = 1%. As the first increment just brought the stress path to the yield surface, the results from this analysis are in agreement with the true solution. The situation is now considered where, after being loaded to point ‘c’, see Figure 27, the soil sample is unloaded with two increments of ∆εa = -1%. The results of this analysis are shown in Figure 29. The predicted stress path on unloading is given by path ‘cde’, which indicates that the soil remains plastic and the stress path stays on the yield surface. This is clearly incorrect, as such behaviour violates the basic postulates of elasto-plastic theory. When unloaded, the soil sample should become purely elastic, and the correct stress path is marked as path ‘cf’ on Figure 29. Because the soil has constant elastic parameters, this path is parallel to the initial elastic loading path ‘ab’.

Figure 29 : Example of an unloading stress path using the tangent stiffness algorithm

The error in the tangent stiffness analysis arises from the fact that, when the first increment of unloading occurs, the stress state is plastic, i.e., at point ‘c’. The algorithm does not “know” that unloading is going to occur, so when it forms the incremental global stiffness matrix it uses the elasto-plastic constitutive matrix [D^ep]. The result is that the stress path remains on the yield surface after application of the unloading increment. Since the soil is still on the yield surface, the same procedure occurs for the second increment of unloading.
8.3.4 Uniform Compression Of A Modified Cam Clay Soil
The above one-dimensional loading problem is now repeated with the soil represented by a simplified form of the modified Cam clay model. The soil parameters are listed in Table 6. Because a constant value of the critical state stress ratio in J-p′ space, MJ, has been used, the yield (and plastic potential) surface plots as a circle in the deviatoric plane. A further simplification has been made for the present analysis. Instead of using the slope of the swelling line to calculate the elastic bulk modulus, constant elastic parameters, E′ = 50000 kPa and ν = 0.26, have been used. This simplification has been made to be consistent with results presented in the next section of this paper. For the present investigation it does not significantly affect soil behaviour, and therefore any conclusions reached are valid for the full model.

Table 6. Properties for modified Cam clay model
Specific volume at unit pressure on virgin consolidation line, v1 : 1.788
Slope of virgin consolidation line in v-ln p′ space, λ : 0.066
Slope of swelling line in v-ln p′ space, κ : 0.0077
Slope of critical state line in J-p′ space, MJ : 0.693

Again the initial stresses are σ′v = σ′h = 50 kPa, and the soil is assumed to be normally consolidated. This latter assumption implies that the initial isotropic stress state is on the yield surface. Three tangent stiffness analyses, with displacement controlled loading increments equivalent to ∆εa = 0.1%, ∆εa = 0.4% and ∆εa = 1% respectively, have been performed. The predicted stress paths are shown in Figure 30, together with the true solution. Consider the analysis with the smallest increment size, ∆εa = 0.1%. Apart from the very first increment, the results of this analysis agree with the true solution. This is not so for the other two analyses. For the analysis with ∆εa = 0.4% the stress path is in considerable error for the first three increments; subsequently the stress path is parallel to the true solution, but a substantial error remains. Matters are even worse for the analysis with the largest increment size, ∆εa = 1%, which has very large errors initially.

Figure 30 : Effect of increment size on the tangent stiffness prediction of an oedometer stress path

The reason for the errors in these analyses is the same as that explained above for the Mohr-Coulomb analysis. That is, the yield (and plastic potential) derivatives are evaluated in illegal stress space, i.e., with stress values which do not satisfy the yield (or plastic potential) function. This is mathematically wrong and leads to incorrect elasto-plastic constitutive matrices. The reason why the errors are much greater than for the Mohr-Coulomb analyses is that the yield (and plastic potential) derivatives are not constant on the yield (or plastic potential) surface, as they are with the Mohr-Coulomb model, but vary. Matters are also not helped by the fact that the model is strain hardening/softening, so that, once the analysis goes wrong, incorrect plastic strains and hardening/softening parameters are subsequently calculated. The comments made above for the Mohr-Coulomb model on implementation of the yield function and on unloading also apply here. In fact, they apply to any constitutive model because they are caused by flaws in the tangent stiffness algorithm itself.
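For readers unfamiliar with the model, the following sketch shows a textbook-style modified Cam clay ellipse and volumetric hardening rule, written in J-p′ space with a constant MJ and with the parameter values of Table 6. It is only an illustration of the type of model being discussed, under assumed standard forms, and is not necessarily identical to the simplified formulation used for the analyses reported here.

```python
import numpy as np

# Textbook-style modified Cam clay in J-p' space (assumed form, for illustration).
class SimplifiedCamClay:
    def __init__(self, M_J=0.693, lam=0.066, kappa=0.0077, v1=1.788, p0=50.0):
        self.M_J, self.lam, self.kappa, self.v1 = M_J, lam, kappa, v1
        self.p0 = p0                               # current size of the yield surface

    def yield_function(self, p, J):
        # F < 0 inside the ellipse (elastic), F = 0 on the yield surface
        return (J / self.M_J) ** 2 + p * (p - self.p0)

    def specific_volume_nc(self, p):
        # virgin consolidation line: v = v1 - lambda * ln p'
        return self.v1 - self.lam * np.log(p)

    def harden(self, d_eps_vol_p, p):
        # volumetric hardening: dp0 / p0 = v * d(eps_v^p) / (lambda - kappa)
        v = self.specific_volume_nc(p)
        self.p0 *= np.exp(v * d_eps_vol_p / (self.lam - self.kappa))
```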
8.4 Visco-plastic Method

8.4.1 Introduction
This method uses the equations of visco-plastic behaviour and time as an artifice to calculate the behaviour of nonlinear, elasto-plastic, time independent materials (Owen and Hinton, 1980; Zienkiewicz and Cormeau, 1974). The method was originally developed for linear elastic visco-plastic (i.e., time dependent) material behaviour. Such a material can be represented by a network of the simple rheological units shown in Figure 31. Each unit consists of an elastic and a visco-plastic component connected in series. The elastic component is represented by a spring and the visco-plastic component by a slider and dashpot connected in parallel. If a load is applied to the network, then one of two situations occurs in each individual unit. If the load is such that the induced stress in the unit does not cause yielding, the slider remains rigid and all the deformation occurs in the spring. This represents elastic behaviour. Alternatively, if the induced stress causes yielding, the slider becomes free and the dashpot is activated. As the dashpot takes time to react, initially all deformation occurs in the spring. However, with time the dashpot moves. The rate of movement of the dashpot depends on the stress it supports and its fluidity. As time progresses, the dashpot moves at a decreasing rate, because some of the stress the unit is carrying is dissipated to adjacent units in the network, which as a result suffer further movements themselves. This represents visco-plastic behaviour.

Figure 31 : Rheological model for visco-plastic material

Eventually, a stationary condition is reached where all the dashpots in the network stop moving and are no longer sustaining stresses. This occurs when the stress in each unit drops below the yield surface and the slider becomes rigid. The external load is now supported purely by the springs within the network, but, importantly, straining of the system has occurred not only due to compression or extension of the springs, but also due to movement of the dashpots. If the load were now removed, only the displacements (strains) occurring in the springs would be recoverable, the dashpot displacements (strains) being permanent.
8.4.2 Numerical Implementation
Application to finite element analysis of elasto-plastic materials can be summarised as follows. On application of a solution increment the system is assumed to respond instantaneously in a linear elastic manner. If the resulting stress state lies within the yield surface, the incremental behaviour is elastic and the calculated displacements are correct. If the resulting stress state violates yield, the stress state can only be sustained momentarily and visco-plastic straining occurs. The magnitude of the visco-plastic strain rate is determined by the value of the yield function, which is a measure of the degree by which the current stress state exceeds the yield condition. The visco-plastic strains increase with time, causing the material to relax with a reduction in the yield function and hence the visco-plastic strain rate. A marching technique is used to step forward in time until the visco-plastic strain rate is insignificant. At this point, the accumulated visco-plastic strain and the associated stress change are equal to the incremental plastic strain and stress change respectively. This process is illustrated for the simple problem of a uniaxially loaded bar of nonlinear material in Figure 32. For genuine visco-plastic materials the visco-plastic strain rate is given by:
$$\left\{\frac{\partial \varepsilon^{vp}}{\partial t}\right\} = \gamma\, f\!\left(\frac{F(\{\sigma\},\{k\})}{F_o}\right) \frac{\partial P(\{\sigma\},\{m\})}{\partial \{\sigma\}} \tag{29}$$
where γ is the dashpot fluidity parameter and F_o is a stress scalar to non-dimensionalise F({σ},{k}) (Zienkiewicz and Cormeau, 1974). When the method is applied to time independent elasto-plastic materials, both γ and F_o can be assumed to be unity (Griffiths, 1980).
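The marching process can be illustrated for a single stress point with the following sketch, which takes γ = F_o = 1 and the function f as the identity, as suggested above for time independent materials. The routines yield_f and dP_dsigma are hypothetical placeholders for the yield function and the plastic potential gradient; a full finite element implementation is more involved, but the essential marching loop is the same.

```python
import numpy as np

# Single-stress-point sketch of the visco-plastic marching scheme of Equation (29),
# with gamma = F_o = 1 and f taken as the identity. yield_f and dP_dsigma are
# hypothetical placeholders for F({sigma},{k}) and dP({sigma},{m})/dsigma.
def viscoplastic_increment(sigma, d_eps, D, yield_f, dP_dsigma,
                           dt, tol=1e-8, max_steps=10000):
    eps_vp = np.zeros_like(d_eps)                 # accumulated visco-plastic strains
    for _ in range(max_steps):
        sigma_new = sigma + D @ (d_eps - eps_vp)  # stress from the elastic strains
        F = yield_f(sigma_new)
        if F <= tol:                              # yield no longer violated: stop
            break
        eps_vp += dt * F * dP_dsigma(sigma_new)   # Equation (29) over one time step
    return sigma_new, eps_vp
```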
In order to use the procedure described above, a suitable time step, ∆t, must be selected. If ∆t is small, many iterations are required to obtain an accurate solution. However, if ∆t is too large, numerical instability can occur. The most economical choice for ∆t is the largest value that can be tolerated without causing such instability. An estimate for this critical time step is suggested by Stolle and Higgins (1989).

Figure 32 : Application of the visco-plastic algorithm to the uniaxial loading of a bar of a nonlinear material

Due to its simplicity, the visco-plastic algorithm has been widely used. However, the method has severe limitations for geotechnical analysis. Firstly, the algorithm relies on the fact that for each increment the elastic parameters remain constant. The simple algorithm cannot accommodate elastic parameters that vary during the increment because, for such cases, it cannot determine the true elastic stress changes associated with the incremental elastic strains. The best that can be done is to use the elastic parameters associated with the accumulated stresses and strains at the beginning of the increment to calculate the elastic constitutive matrix, [D], and assume that this remains constant for the increment. Such a procedure only yields accurate results if the increments are small or the elastic nonlinearity is not great.

A more severe limitation of the method arises when the algorithm is used as an artifice to solve problems involving non-viscous material (i.e., elasto-plastic materials). As noted above, the visco-plastic strains are calculated using Equation (29), in which the partial differentials of the plastic potential are evaluated at an illegal stress state {σ}^t, which lies outside the yield surface, i.e., F({σ},{k}) > 0. As noted for the tangent stiffness method, this is theoretically incorrect and results in failure to satisfy the constitutive equations. The magnitude of the error depends on the constitutive model and in particular on how sensitive the partial derivatives are to the stress state. This is now illustrated by applying the visco-plastic algorithm to the one-dimensional loading problem (i.e., ideal oedometer test) considered above for the tangent stiffness method.
8.4.3 Uniform Compression Of A Mohr-Coulomb Soil
Figure 33 : Oedometer stress path predicted by the visco-plastic algorithm
As with the tangent stiffness method, the problem shown graphically in Figure 25, with the soil properties given in the previous Mohr-Coulomb example, is considered. Figure 33 shows the stress path in J-p′ space predicted by a visco-plastic analysis in which equal increments of vertical displacement were applied to the top of the sample. Each increment gave an axial strain ∆εa = 3%, and therefore the predictions in Figure 33 are directly comparable to those for the tangent stiffness method given in Figure 26. The results were obtained using the critical time step. It can be seen that the visco-plastic predictions are in remarkably good agreement with the true solution. Even when the increment size was doubled (i.e., ∆εa = 6%), the predictions did not change significantly. Due to the problem highlighted above, concerning evaluation of the plastic potential differentials in illegal stress space, there were some small differences, but these only caused changes in the fourth significant figure for both stresses and plastic strains. Predictions were also insensitive to the value of the time step, as long as it was not greater than the critical value. The results were therefore not significantly dependent on either the solution increment size or the time step. The algorithm was also able to deal accurately with the change from purely elastic to elasto-plastic behaviour and vice versa. In these respects the algorithm behaved much better than the tangent stiffness method. It can therefore be concluded that the visco-plastic algorithm works well for this one-dimensional loading problem with the Mohr-Coulomb model. It has also been found that it works well for other boundary value problems involving either the Tresca or the Mohr-Coulomb model.
8.4.4 Uniform Compression Of A Modified Cam Clay Soil
The one-dimensional loading problem was repeated with the soil represented by the simplified modified Cam clay model described previously. This model has linear elastic behaviour and therefore the problem of dealing with nonlinear elasticity does not arise; in fact, it was because of this deficiency in the visco-plastic algorithm that the model was simplified. The soil properties are given in Table 6 and the initial conditions are discussed above. Results from four visco-plastic analyses, with displacement controlled loading increments equivalent to ∆εa = 0.01%, ∆εa = 0.1%, ∆εa = 0.4% and ∆εa = 1%, are compared with the true solution in Figure 34. The results have been obtained using the critical time step, and the convergence criterion was set such that the iteration process stopped when there was no change in the fourth significant figure of the incremental stresses and incremental plastic strains. Only the solution with the smallest increment size (i.e., ∆εa = 0.01%) agrees with the true solution.

Figure 34 : Effect of increment size on the visco-plastic prediction of an oedometer stress path

It is instructive to compare these results with those given in Figure 30 for the tangent stiffness method. In view of the accuracy of the analysis with the Mohr-Coulomb model, it is perhaps surprising that the visco-plastic algorithm requires smaller increments than the tangent stiffness method to obtain an accurate solution. It is also of interest to note that when the increment size is too large, the tangent stiffness predictions lie above the true solution, whereas for the visco-plastic analyses the opposite occurs, with the predictions lying below the true solution. The visco-plastic solutions are particularly in error during the early stages of loading, see Figure 34b.

To explain why the visco-plastic solutions are in error, consider the results shown in Figure 35. The true solution is marked as a dashed line on this plot. A visco-plastic analysis consisting of a single increment, equivalent to ∆εa = 1%, is performed starting from point ‘a’, which is on the true stress path. To do this in the analysis, the initial stresses are set appropriate to point ‘a’: σ′v = 535.7 kPa and σ′h = 343.8 kPa. This loading increment should move the stress path from point ‘a’ to point ‘e’. The line ‘ae’ therefore represents the true solution to which the visco-plastic analysis can be compared. However, the visco-plastic analysis actually moves the stress path from point ‘a’ to point ‘d’, thus incurring a substantial error.

Figure 35 : A single increment of a visco-plastic analysis

To see how such an error arises, the intermediate steps involved in the visco-plastic algorithm are plotted in Figure 35. These can be explained as follows. Initially, on the first iteration, the visco-plastic strains are zero and the stress change is assumed to be entirely elastic. This is represented by the stress state at point ‘b’. This stress state is used to evaluate the first contribution to the incremental visco-plastic strains. These strains are therefore based on the normal to the plastic potential function at ‘b’. This normal is shown on Figure 35 and should be compared to that shown for point ‘a’, which provides the correct solution. As the directions of the normals differ significantly, the resulting contribution to the visco-plastic strains is in error (note: along path ‘ae’ of the true solution, the normal to the plastic potential does not change significantly, being very similar to that at point ‘a’). This contribution to the visco-plastic strains is used to calculate a correction vector, and is also used to update the hardening/softening parameter for the constitutive model. A second iteration is performed which, due to the correction vector, gives different incremental displacements and incremental total strains. The incremental stresses are also now different, as they depend on these new incremental total strains and the visco-plastic strains calculated for iteration 1. The stress state is now represented by point ‘c’. A second contribution to the visco-plastic strains is calculated based on the plastic potential at point ‘c’. Again, this is in error because this is an illegal stress state. The error is related to the difference in direction of the normals to the plastic potential surfaces at points ‘a’ and ‘c’. This second contribution to the incremental plastic strains is used to obtain an additional correction vector. A third iteration is then performed which brings the stress state to point ‘d’ on Figure 35. Subsequent iterations cause only very small changes to the visco-plastic strains and the incremental stresses, and therefore the stress state remains at point ‘d’.

At the end of the iterative process, the incremental plastic strains are equated to the visco-plastic strains. As the visco-plastic strains are the sum of the contributions obtained from each iteration, and as each of these contributions has been calculated using the incorrect plastic potential differentials (i.e., the wrong direction of the normal), the incremental plastic strains are in error. This is evident from Figure 36, which compares the predicted and true incremental plastic strains. Since the hardening parameter for the model is calculated from the plastic strains, this is also incorrect. It is therefore not surprising that the algorithm ends up giving the wrong stress state, represented by point ‘d’ in Figure 35. If the soil sample is unloaded at any stage, the analysis indicates elastic behaviour and therefore behaves correctly.

Figure 36 : Comparison of incremental plastic strains from a single increment of a visco-plastic analysis and the true solution

It is concluded that for complex critical state constitutive models the visco-plastic algorithm can involve severe errors. The magnitude of these errors depends on the finer details of the model and, in particular, on how rapidly the plastic potential differentials vary with changes in stress state. The problems associated with the implementation of a particular constitutive model, as discussed for the tangent stiffness method, also apply here. As the plastic strains are calculated from plastic potential differentials evaluated in illegal stress space, the answers depend on the finer details of how the model is implemented in the software. Again, two pieces of software which purport to use the same equations could give different results. This conclusion is perhaps surprising, as the visco-plastic algorithm appears to work well for simple constitutive models of the Tresca and Mohr-Coulomb types. However, as noted previously, in these simpler models the differentials of the plastic potential do not vary by a great amount when the stress state moves into illegal stress space.
8.5 Modified Newton-Raphson Method

8.5.1 Introduction
The previous discussion of both the tangent stiffness and visco-plastic algorithms has demonstrated that errors can arise when the constitutive behaviour is based on illegal stress states. The modified Newton-Raphson (MNR) algorithm described in this section attempts to rectify this problem by only evaluating the constitutive behaviour in, or very near to, legal stress space. The MNR method uses an iterative technique to solve Equation (17). The first iteration is essentially the same as the tangent stiffness method. However, it is recognised that the solution is likely to be in error, and the predicted incremental displacements are used to calculate the residual load, a measure of the error in the analysis. Equation (17) is then solved again with this residual load, {Ψ}, forming the incremental right hand side vector. Equation (17) can be rewritten as:
$$[K_G]^i \left(\{\Delta d\}^i_{nG}\right)^j = \{\Psi\}^{j-1} \tag{30}$$
The superscript ‘j’ refers to the iteration number and {Ψ}^0 = {∆R_G}^i. This process is repeated until the residual load is small. The incremental displacements are equal to the sum of the iterative displacements. This approach is illustrated in Figure 37 for the simple problem of a uniaxially loaded bar of nonlinear material. In principle, the iterative scheme ensures that for each solution increment the analysis satisfies all solution requirements.

Figure 37 : Application of the modified Newton-Raphson algorithm to the uniaxial loading of a bar of nonlinear material

A key step in this calculation process is determining the residual load vector. At the end of each iteration the current estimate of the incremental displacements is calculated and used to evaluate the incremental strains at each integration point. The constitutive model is then integrated along the incremental strain paths to obtain an estimate of the stress changes. These stress changes are added to the stresses at the beginning of the increment and used to evaluate consistent equivalent nodal forces. The difference between these forces and the externally applied loads (from the boundary conditions) gives the residual load vector. A difference arises because a constant incremental global stiffness matrix [K_G]^i is assumed over the increment. Due to the nonlinear material behaviour, [K_G]^i is not constant but varies with the incremental stress and strain changes. Since the constitutive behaviour changes over the increment, care must be taken when integrating the constitutive equations to obtain the stress change. Methods of performing this integration are termed stress point algorithms, and both explicit and implicit approaches have been proposed in the literature. There are many of these algorithms in use and, as they control the accuracy of the final solution, users must verify the approach used in their software. Two of the most accurate stress point algorithms are described subsequently.

The process described above is called a Newton-Raphson scheme if the incremental global stiffness matrix [K_G]^i is recalculated and inverted for each iteration, based on the latest estimate of the stresses and strains obtained from the previous iteration. To reduce the amount of computation, the modified Newton-Raphson method only calculates and inverts the stiffness matrix at the beginning of the increment and uses it for all iterations within the increment. Sometimes the incremental global stiffness matrix is calculated using the elastic constitutive matrix, [D], rather than the elasto-plastic matrix, [D^ep]. Clearly, there are several options here, and many software packages allow the user to specify how the MNR algorithm should work. In addition, an acceleration technique is often applied during the iteration process (Thomas, 1984).
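A schematic of one MNR solution increment, as described above, is given below. The callables integrate_stresses and residual_load are hypothetical placeholders: the former applies a stress point algorithm (discussed next) along the current estimate of the incremental strain path, and the latter compares the resulting consistent nodal forces with the applied loads.

```python
import numpy as np

# Schematic of one modified Newton-Raphson increment (Equation (30)). K is formed
# and factorised once per increment; integrate_stresses and residual_load are
# hypothetical placeholders for the stress point algorithm and for the evaluation
# of {Psi} from the integrated stresses and the applied loads.
def mnr_increment(K, dR_ext, integrate_stresses, residual_load,
                  tol=1e-6, max_iter=50):
    dd_inc = np.zeros_like(dR_ext)             # incremental displacements
    psi = dR_ext.copy()                        # {Psi}^0 = {Delta R_G}^i
    for _ in range(max_iter):
        dd_iter = np.linalg.solve(K, psi)      # iterative displacements, Eq. (30)
        dd_inc += dd_iter                      # sum of the iterative displacements
        stresses = integrate_stresses(dd_inc)  # integrate model along strain path
        psi = residual_load(stresses, dR_ext)  # residual load vector {Psi}^j
        if (np.linalg.norm(psi) <= tol * np.linalg.norm(dR_ext)
                and np.linalg.norm(dd_iter) <= tol * np.linalg.norm(dd_inc)):
            break                              # both norms small: converged
    return dd_inc, stresses
```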
8.5.2 Stress Point Algorithms
Two classes of stress point algorithms are considered. The substepping algorithm is essentially explicit, whereas the return algorithm is implicit. In both the substepping and return algorithms, the objective is to integrate the constitutive equations along an incremental strain path. While the magnitudes of the incremental strains are known, the manner in which they vary during the increment is not. It is therefore not possible to integrate the constitutive equations without making an additional assumption. Each stress point algorithm makes a different assumption and this influences the accuracy of the solution obtained. The schemes presented by Wissman and Hauck (1983) and Sloan (1987) are examples of substepping stress point algorithms. In this approach, the incremental strains are divided into a number of substeps. It is assumed that in each substep the strains {∆ε_ss} are a proportion, ∆T, of the incremental strains {∆ε_inc}. This can be expressed as:
$$\{\Delta \varepsilon_{ss}\} = \Delta T\,\{\Delta \varepsilon_{inc}\} \tag{31}$$
It should be noted that in each substep the ratio between the strain components is the same as that for the incremental strains, and hence the strains are said to vary proportionally over the increment. The constitutive equations are then integrated numerically over each substep using either an Euler, modified Euler or Runge-Kutta scheme. The size of each substep (i.e., ∆T) can vary and, in the more sophisticated schemes, is determined by setting an error tolerance on the numerical integration. This allows control of errors resulting from the numerical integration procedure and ensures that they are negligible. The basic assumption in these substepping approaches is therefore that the strains vary in a proportional manner over the increment. In some boundary value problems this assumption is correct and consequently the solutions are extremely accurate. However, in general, this may not be true and an error can be introduced. The magnitude of the error is dependent on the size of the solution increment.

The schemes presented by Borja and Lee (1990) and Borja (1991) are examples of one-step implicit type return algorithms. In this approach, the plastic strains over the increment are calculated from the stress conditions corresponding to the end of the increment. The problem, of course, is that these stress conditions are not known, hence the implicit nature of the scheme. Most formulations involve some form of elastic predictor to give a first estimate of the stress changes, coupled with a sophisticated iterative sub-algorithm to transfer from this stress state back to the yield surface. The objective of the iterative sub-algorithm is to ensure that, on convergence, the constitutive behaviour is satisfied, albeit with the assumption that the plastic strains over the increment are based on the plastic potential at the end of the increment. Many different iterative sub-algorithms have been proposed in the literature. In view of the findings presented previously, it is important that the final converged solution does not depend on quantities evaluated in illegal stress space. In this respect some of the earlier return algorithms broke this rule and are therefore inaccurate. The basic assumption in these approaches is therefore that the plastic strains over the increment can be calculated from the stress state at the end of the increment. This is theoretically incorrect, as the plastic response, and in particular the plastic flow direction, is a function of the current stress state. The plastic flow direction should be consistent with the stress state at the beginning of the solution increment and should evolve as a function of the changing stress state, such that at the end of the increment it is consistent with the final stress state. This type of behaviour is exemplified by the substepping approach. If the plastic flow direction does not change over an increment, the return algorithm solutions are accurate. Invariably, however, this is not the case and an error is introduced. The magnitude of any error is dependent on the size of the solution increment.

Potts and Ganendra (1994) performed a fundamental comparison of these two types of stress point algorithm. They conclude that both algorithms give accurate results, but, of the two, the substepping algorithm is better.
Another advantage of the substepping approach is that it is extremely robust and can easily deal with constitutive models in which two or more yield surfaces are active simultaneously and for which the elastic portion of the model is highly nonlinear. In fact, most of the software required to program the algorithm is common to any constitutive model. This is not so for the return algorithm which, although in theory it can accommodate such complex constitutive models, involves some extremely complicated mathematics. The software to deal with the algorithm is also constitutive model dependent, which means considerable effort is required to include a new or modified model.

As the MNR method involves iterations for each solution increment, convergence criteria must be set. This usually involves setting limits to the size of both the iterative displacements, ({∆d}^i_{nG})^j, and the residual loads, {Ψ}^j. As both these quantities are vectors, it is normal to express their size in terms of scalar norms.
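As an illustration of the substepping idea, the sketch below integrates the constitutive equations along a proportionally varying strain path using a modified Euler pair with an adaptive substep size, broadly in the spirit of the schemes referred to above (e.g., Sloan, 1987). It is deliberately simplified: D_ep is a hypothetical callable returning the elasto-plastic matrix at a given stress state, hardening variables and drift correction are omitted, and no claim is made that this reproduces the exact error control of any published algorithm.

```python
import numpy as np

# Simplified modified-Euler substepping along a proportional strain path
# (Equation (31)). D_ep(sigma) is a hypothetical callable; hardening variables
# and drift correction are omitted for brevity.
def substep_integrate(sigma, d_eps_inc, D_ep, tol=1e-4, dT_min=1e-6):
    T, dT = 0.0, 1.0
    while T < 1.0:
        dT = min(dT, 1.0 - T)
        d_eps = dT * d_eps_inc                        # substep strains, Eq. (31)
        d_sig_1 = D_ep(sigma) @ d_eps                 # first (Euler) estimate
        d_sig_2 = D_ep(sigma + d_sig_1) @ d_eps       # second estimate
        d_sig = 0.5 * (d_sig_1 + d_sig_2)             # modified Euler average
        err = np.linalg.norm(d_sig_2 - d_sig_1) / max(np.linalg.norm(sigma + d_sig), 1e-12)
        if err <= tol or dT <= dT_min:                # accept the substep
            sigma = sigma + d_sig
            T += dT
        # adapt the substep size from the local error estimate
        dT *= min(2.0, max(0.1, 0.9 * np.sqrt(tol / max(err, 1e-12))))
    return sigma
```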
Figure 38 : Oedometer stress paths predicted by the MNR algorithm: a) Mohr-Coulomb and b) modified Cam clay models
8.5.3 Uniform Compression Of Mohr-Coulomb And Modified Cam Clay Soils
The MNR method using a substepping stress point algorithm has been used to analyse the simple one-dimensional oedometer problem considered previously for both the tangent stiffness and visco-plastic approaches. Results are presented in Figures 38a and 38b for the Mohr-Coulomb and modified Cam clay soils, respectively. To be consistent with the analyses performed with the tangent stiffness and visco-plastic algorithms, the analysis for the Mohr-Coulomb soil involved displacement increments which gave incremental strains ∆εa = 3%, whereas for the modified Cam clay analysis the increment size was equivalent to ∆εa = 1%. The predictions are in excellent agreement with the true solution. An unload-reload loop is shown in each figure, indicating that the MNR approach can accurately deal with changes in stress path direction. For the modified Cam clay analysis it should be noted that at the beginning of the test the soil sample was normally consolidated, with an isotropic stress p′ = 50 kPa. The initial stress path is therefore elasto-plastic and not elastic; consequently, it is not parallel to the unload/reload path. Additional analyses, performed with different sizes of solution increment, indicate that the predictions, for all practical purposes, are independent of increment size. These results clearly show that, for this simple problem, the MNR approach does not suffer from the inaccuracies inherent in both the tangent stiffness and visco-plastic approaches. To investigate how the different methods perform for more complex boundary value problems, a small parametric study has been performed, see Potts and Zdravkovic (1999). Some of the main findings of this study are presented next.
8.6 Comparison Of Solution Strategies

8.6.1 Introduction
A comparison of the three solution strategies presented above suggests the following. The tangent stiffness method is the simplest, but its accuracy is influenced by increment size. The accuracy of the visco-plastic approach is also influenced by increment size if complex constitutive models are used. The MNR method is potentially the most accurate and is likely to be the least sensitive to increment size. However, considering the computer resources required for each solution increment, the MNR method is likely to be the most expensive, the tangent stiffness method the cheapest and the visco-plastic method is probably somewhere in between. It may be possible, though, to use larger and therefore fewer increments with the MNR method to obtain a similar accuracy. Thus, it is not obvious which solution strategy is the most economic for a particular solution accuracy.

All three solution algorithms have been incorporated into the single computer program, ICFEP. Consequently, much of the computer code is common to all analyses and any difference in the results can be attributed to the different solution strategies. The code has been extensively tested against available analytical solutions and with other computer codes, where applicable. The program was used to compare the relative performance of each of the three schemes in the analysis of two simple idealised laboratory tests and three more complex boundary value problems. Analyses of the laboratory tests were carried out using a single four-noded isoparametric element with a single integration point, whereas eight-noded isoparametric elements, with reduced integration, were employed for the analyses of the boundary value problems. As already shown, the errors in the solution algorithms are more pronounced for critical state type models than for the simpler linear elastic perfectly plastic models (i.e., Mohr-Coulomb and Tresca). Hence, in the comparative study the soil has been modelled with the modified Cam clay model. To account for the nonlinear elasticity that is present in this model, the visco-plastic algorithm was modified to incorporate an additional stress correction, based on an explicit stress point algorithm similar to that used in the MNR method, at each time step, see Potts and Zdravkovic (1999).
8.6.2 Ideal Drained Triaxial Test
Idealised drained triaxial compression tests were considered. A cylindrical sample was assumed to be isotropically normally consolidated to a mean effective stress p′ = 200 kPa, with zero pore water pressure. The soil parameters used for the analyses are shown in Table 7.

Table 7. Modified Cam clay properties for drained triaxial tests
Overconsolidation ratio : 1.0
Specific volume at unit pressure on virgin consolidation line, v1 : 1.788
Slope of virgin consolidation line in v-ln p′ space, λ : 0.066
Slope of swelling line in v-ln p′ space, κ : 0.0077
Slope of critical state line in J-p′ space, MJ : 0.693
Elastic shear modulus / preconsolidation pressure, G/p0′ : 100

Increments of compressive axial strain were applied to the sample until the axial strain reached 20%, while maintaining a constant radial stress and zero pore water pressure. The results are presented as plots of volumetric strain and deviatoric stress, q, versus axial strain. Deviatoric stress q is defined as:
$$q = \sqrt{3}\,J = \sqrt{\tfrac{1}{2}\left[(\sigma_1' - \sigma_2')^2 + (\sigma_2' - \sigma_3')^2 + (\sigma_3' - \sigma_1')^2\right]} = \sigma_1' - \sigma_3' \ \text{for triaxial conditions} \tag{32}$$
Results of these analyses are presented in Figures 39, 40 and 41. The label associated with each line in these plots indicates the magnitude of axial strain applied at each increment of that analysis. The tests were deemed ideal as the end effects at the top and bottom of the sample were considered negligible and the stress and strain conditions were uniform throughout. Analytical solutions are also provided on the plots for comparison purposes.
Figure 39 : Modified Newton-Raphson: Drained triaxial test
Results from the MNR analyses are compared with the analytical solution in Figure 39. The results are not sensitive to increment size and agree well with the analytical solution. The tangent stiffness results are presented in Figure 40. The results are sensitive to increment size, giving very large errors for the larger increment sizes. The deviatoric stress, q, at failure (20% axial strain) is over predicted. The results of the visco-plastic analyses are shown in Figure 41. Inspection of this figure indicates that the solution is also sensitive to increment size. Even the results from the analyses with the smallest increment size of 0.1% are in considerable error. It is of interest to note that results from the tangent stiffness analyses over predict the deviatoric stress at any particular value of axial strain. The opposite is true for the visco-plastic analyses, where q is under predicted in all cases. This is similar to the observations made earlier for the simple oedometer test.
Figure 40 : Tangent stiffness: Drained triaxial test
Figure 41 : Visco-plastic: Drained triaxial test
8.6.3 Footing Problem
A smooth rigid strip footing subjected to vertical loading, as depicted in Figure 42, has been analysed. The same soil constitutive model and parameters as used for the idealised triaxial test analyses, see Table 7, have been employed to model the soil which, in this case, was assumed to behave undrained. The finite element mesh is shown in Figure 43. Note that due to symmetry about the vertical line through the centre of the footing, only half of the problem needs to be considered in the finite element analysis. Plane strain conditions are assumed. Before loading the footing, the coefficient of earth pressure at rest, Ko, was assumed to be unity, and the vertical effective stress and pore water pressure were calculated using a saturated bulk unit weight of the soil of 20 kN/m³ and a static water table at the ground surface.
Figure 42 : Geometry of footing problem
Figure 43 : Finite element mesh for footing analysis
The footing was loaded by applying a series of equally sized increments of vertical displacement until the total displacement was 25 mm. The load-displacement curves for the tangent stiffness, visco-plastic and MNR analyses are presented in Figure 44. For the MNR method, analyses were performed using 1, 2, 5, 10, 25, 50 and 500 increments to reach a footing settlement of 25 mm. With the exception of the analysis performed with only a single increment, all analyses gave very similar results and plot as a single curve, marked MNR on this figure. The MNR results are therefore insensitive to increment size and show a well-defined collapse load of 2.8 kN/m.

For the tangent stiffness approach, analyses using 25, 50, 100, 200, 500 and 1000 increments have been carried out. Analyses with a smaller number of increments were also attempted, but illegal stresses (negative mean effective stresses, p′) were predicted. As the constitutive model is not defined for such stresses, the analyses had to be aborted. Some finite element packages overcome this problem by arbitrarily resetting the offending negative p′ values. There is no theoretical basis for this, and it leads to violation of both the equilibrium and the constitutive conditions. Although such adjustments enable an analysis to be completed, the final solution is in error. Results from the tangent stiffness analyses are shown in Figure 44. When plotted, the curve from the analysis with 1000 increments is indistinguishable from those of the MNR analyses. The tangent stiffness results are strongly influenced by increment size, with the ultimate footing load decreasing from 7.5 kN/m to 2.8 kN/m with reduction in the size of the applied displacement increment. There is also a tendency for the load-displacement curve to continue to rise and not reach a well-defined ultimate failure load for the analyses with large applied displacement increments. The results are unconservative, over predicting the ultimate footing load. There is also no indication from the shape of the tangent stiffness load-displacement curves as to whether the solution is accurate, since all the curves have similar shapes.

Figure 44 : Footing load-displacement curves
Visco-plastic analyses with 10, 25, 50, 100 and 500 increments were performed. The 10 increment analysis had convergence problems in the iteration process, which would initially converge, but then diverge. Similar behaviour was encountered for analyses using still fewer increments. Results from the analyses with 25 and 500 increments are shown in Figure 44. The solutions are sensitive to increment size, but to a lesser degree than the tangent stiffness approach. The load on the footing at a settlement of 25mm is plotted against the number of increments, for all tangent stiffness, visco-plastic and MNR analyses, in Figure 45.
Figure 45 : Ultimate footing load against number of increments
The insensitivity of the MNR analyses to increment size is clearly shown. In these analyses the ultimate footing load only changed from 2.83kN/m to 2.79kN/m as the number of increments increased from 2 to 500. Even for the MNR analysis performed with a single increment, the resulting ultimate footing load of 3.13kN/m is still reasonable and is more accurate than the value of 3.67kN/m obtained from the tangent stiffness analysis with 200 increments. Both the tangent stiffness and visco-plastic analyses produce ultimate failure loads which approach 2.79kN/m as the number of increments increases. However, the tangent stiffness analyses approach this value from above and therefore over predict, while the visco-plastic analyses approach it from below and therefore under predict. This trend is consistent with the results from the triaxial test, where the ultimate value of q was over predicted by the tangent stiffness and under predicted by the visco-plastic approach. It can be shown that, for the material properties and initial stress conditions adopted, the undrained strength of the soil, su, varies linearly with depth below the ground surface. Davis and Booker (1973) provide approximate solutions for the bearing capacity of footings on soils with such an undrained strength profile. For the present situation their charts give a collapse load of 1.91kN/m and it can be seen (Figure 45) that all the finite element predictions exceed this value. This occurs because the analytical failure zone is very localised near the soil surface. Further analyses have been carried out using a refined mesh in which the thickness of the elements immediately below the footing has been reduced from 0.1m to 0.03m. The MNR analyses with this mesh predict an ultimate load of 2.1kN/m. If the mesh is further refined the Davis and Booker solution will be recovered. For this refined mesh even smaller applied displacement increment sizes were required for the tangent stiffness analyses to obtain an accurate solution. Analyses with an increment size of 0.125mm displacement (equivalent to the analysis with 200 increments described above) or greater yielded negative values of p′ in the elements below the corner of the footing, and therefore these analyses could not be completed.
8.6.4 Comments
Results from the tangent stiffness analyses of both the idealised triaxial tests and the more complex boundary value problems are strongly dependent on increment size. The error associated with the tangent stiffness analyses usually results in unconservative predictions of failure loads and displacements in most geotechnical problems. For the footing problem large over predictions of failure loads are obtained, unless a very large number of increments (≥1000) is employed. Inaccurate analyses based on too large an increment size nevertheless produced ostensibly plausible load-displacement curves. Analytical solutions are not available for most problems requiring a finite element analysis, and it is therefore difficult to judge whether a tangent stiffness analysis is accurate on the basis of its results alone. Several analyses must be carried out using different increment sizes to establish the likely accuracy of any predictions. This could be a very costly exercise, especially if there is little experience with the type of problem being analysed and no indication of the optimum increment size. Results from the visco-plastic analyses are also dependent on increment size. For boundary value problems involving undrained soil behaviour these analyses were more accurate than tangent stiffness analyses with the same increment size. However, if soil behaviour was drained, visco-plastic analyses were only accurate if many small solution increments were used. In general, the visco-plastic analyses used more computer resources than both the tangent stiffness and MNR approaches. For the triaxial tests, footing and pile problems (not presented here), the visco-plastic analyses under predicted failure loads if insufficient increments were used and were therefore conservative in this context. For both the tangent stiffness and visco-plastic analyses the number of increments required to obtain an accurate solution is problem dependent. For example, for the footing problem the tangent stiffness approach required over 1000 increments and the visco-plastic method over 500 increments, whereas for an excavation problem (not presented here) the former required only 100 and the latter only 10 increments. Close inspection of the results from the visco-plastic and tangent stiffness analyses indicated that a major reason for their poor performance was their failure to satisfy the constitutive laws. This problem is largely eliminated in the MNR approach, where a much tighter constraint on the constitutive conditions is enforced. The results from the MNR analyses are accurate and essentially independent of increment size. For the boundary value problems considered, the tangent stiffness method required considerably more CPU time than the MNR method to obtain results of similar accuracy, e.g., over seven times more for the foundation problem and over three and a half times more for the excavation problem. Similar comparisons can be made between the MNR and visco-plastic solutions. Thus, notwithstanding the potentially very large computer resources required to find the optimum tangent stiffness or visco-plastic increment size, the tangent stiffness or visco-plastic method with an optimum increment size is still likely to require more computer resources than an MNR analysis of the same accuracy. Though it may be possible to obtain tangent stiffness or visco-plastic results using less computer resources than with the MNR approach, this is usually at the expense of the accuracy of the results. Alternatively, for a given amount of computing resources, an MNR analysis produces a more accurate solution than either the tangent stiffness or visco-plastic approaches. The study has shown that the MNR method appears to be the most efficient solution strategy for obtaining an accurate solution to problems using critical state type constitutive models for soil behaviour. The large errors in the results from the tangent stiffness and visco-plastic algorithms in the present study emphasise the importance of checking the sensitivity of the results of any finite element analysis to increment size.
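To make the distinction between the solution strategies concrete, the following minimal single degree-of-freedom sketch (not the code used in the analyses above) contrasts a pure tangent stiffness update with a modified Newton-Raphson iteration that keeps a constant stiffness and iterates on the out-of-balance force. The hyperbolic "spring" and all numbers are purely illustrative assumptions.

# Minimal 1-DOF illustration (not from the paper) of why an iterative scheme
# such as the modified Newton-Raphson (MNR) method drifts far less than a
# pure tangent stiffness scheme.  The "soil" is a hypothetical hyperbolic
# spring with internal force F_int(u) = F_ULT * u / (A + u).

F_ULT, A = 100.0, 10.0          # assumed spring parameters
K0 = F_ULT / A                  # initial (elastic) stiffness

def f_int(u):                   # internal force mobilised at displacement u
    return F_ULT * u / (A + u)

def k_tan(u):                   # tangent stiffness dF/du
    return F_ULT * A / (A + u) ** 2

def tangent_stiffness(f_target, n_inc):
    u = 0.0
    for _ in range(n_inc):      # apply the load in equal increments, no correction
        u += (f_target / n_inc) / k_tan(u)
    return u

def mnr(f_target, n_inc, tol=1e-6, max_it=1000):
    u = 0.0
    for i in range(1, n_inc + 1):
        f = f_target * i / n_inc
        for _ in range(max_it):            # iterate on the residual using the
            r = f - f_int(u)               # constant (elastic) stiffness K0
            if abs(r) < tol:
                break
            u += r / K0
    return u

F = 80.0                                    # 80% of the 'failure' load
u_exact = A * F / (F_ULT - F)               # analytical inverse of f_int
for n in (1, 5, 50):
    print(n, tangent_stiffness(F, n), mnr(F, n), u_exact)
# The tangent stiffness error shrinks only as the increments become smaller,
# whereas MNR converges to u_exact (within the residual tolerance) for any n,
# because every increment is forced to satisfy equilibrium with the true F_int.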
8.7 Using The Mohr-Coulomb Model In Constrained Problems
The Mohr-Coulomb model can be used with a dilation angle ranging from ψ = 0 to ψ = φ'. This parameter controls the magnitude of the plastic dilation (plastic volume expansion) and remains constant once the stress state of the soil is on the yield surface. This implies that the soil will continue to dilate indefinitely if shearing continues. Clearly such behaviour is not realistic, as most soils will eventually reach a critical state condition, after which they will deform at constant volume if sheared any further. While such unrealistic behaviour does not have a great influence on boundary value problems that are unrestrained (e.g., the drained surface footing problem), it can have a major effect on problems which are constrained (e.g., drained cavity expansion, end bearing of a deeply embedded pile), due to the restrictions on volume change imposed by the boundary conditions. In particular, unexpected results can be obtained in undrained analysis, in which a severe constraint is imposed by the zero total volume change restriction associated with undrained soil behaviour. To illustrate this problem two examples will now be presented. The first example considers ideal (no end effects) undrained triaxial compression (∆σv > 0, ∆σh = 0) tests on a linear elastic Mohr-Coulomb plastic soil with parameters E' = 10000kPa, ν = 0.3, c' = 0, φ' = 24°. As there are no end effects, a single finite element is used to model the triaxial test with the appropriate boundary conditions. The samples were assumed to be initially isotropically consolidated with p' = 200kPa and zero pore water pressure. A series of finite element runs were then made, each with a different angle of dilation, ψ, in which the samples were sheared undrained. Undrained conditions were enforced by setting the bulk modulus of water to be 1000 times larger than the effective elastic bulk modulus of the soil skeleton, K'. The results are shown in Figures 46a and 46b in the form of J-p' and J-εz plots. It can be seen that in terms of J-p' all analyses follow the same stress path. However, the rate at which the stress state moves up the Mohr-Coulomb failure line differs for each analysis. This can be seen from Figure 46b. The analysis with zero plastic dilation, ψ = 0, remains at constant J and p' when it reaches the failure line. However, all other analyses move up the failure line, with those with the larger dilation angles moving up more rapidly.
Figure 46 : Prediction of dilation a) stress paths and b) stress-strain curves in triaxial compression, using Mohr-Coulomb model with different angles of dilation (Note: ν ≡ ψ in this figure)
They continue to move up this failure line indefinitely with continued shearing. Consequently, the only analysis that indicates failure (i.e., a limiting value of J) is the analysis performed with zero plastic dilation. The second example considers the undrained loading of a smooth rigid strip footing. The soil was assumed to have the same parameters as those used for the triaxial tests above. The initial stresses in the soil were calculated on the basis of a saturated bulk unit weight of 20kN/m3, a ground water table at the soil surface and Ko = 1-sinφ'. The footing was loaded by applying increments of vertical displacement, and undrained conditions were again enforced by setting the bulk modulus of the pore water to be 1000 times K'. The results of two analyses, one with ψ = 0° and the other with ψ = φ', are shown in Figure 47. The difference is quite staggering: while the analysis with ψ = 0° reaches a limit load, the analysis with ψ = φ' shows a continuing increase in load with displacement.
Figure 47 : Load-displacement curves for strip footing, Mohr-Coulomb model with different angles of dilation
As with the triaxial tests, a limit load is only obtained if ψ = 0°. Consequently great care must be exercised when using the Mohr-Coulomb model in undrained analysis. It could be argued that the model should not be used with ψ > 0 for such analyses. However, reality is not that simple, and often a finite element analysis involves both an undrained and a drained phase (e.g., undrained excavation followed by drained dissipation). Consequently it may be necessary to adjust the value of ψ between the two phases of the analysis. Alternatively, a more complex constitutive model which better represents soil behaviour may have to be employed.
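The mechanics behind this behaviour can be sketched in a few lines: under the undrained constraint the total volumetric strain is zero, so any plastic dilation must be balanced by elastic volumetric compression of the skeleton, which drives p′ (and hence J, for a purely frictional material sitting on the failure line) upwards without limit. The snippet below is an idealised illustration only; the slope adopted for the failure line in J-p′ space and the size of the plastic shear strain steps are assumptions, not values taken from the analyses above.

import math

# Idealised sketch of why a dilatant Mohr-Coulomb soil cannot fail under the
# undrained (zero total volume change) constraint.  Compression-positive
# convention; c' = 0, so the failure condition is written here as J = M * p',
# where M is an assumed, illustrative slope of the failure line.

E, NU = 10000.0, 0.3                    # kPa, as in the example above
K = E / (3 * (1 - 2 * NU))              # effective elastic bulk modulus K'
PHI = math.radians(24.0)
M = math.sin(PHI)                       # illustrative failure-line slope only

def shear_on_failure_line(p0, psi_deg, d_gamma_p=1e-4, steps=200):
    """March plastic shear strain along the failure line and track (p', J)."""
    psi = math.radians(psi_deg)
    p = p0
    for _ in range(steps):
        # plastic dilation: d(eps_v^p) = -sin(psi) * d(gamma^p)  (expansion);
        # undrained: total volumetric strain is zero, so the elastic part is
        # equal and opposite, which raises the mean effective stress:
        p += K * math.sin(psi) * d_gamma_p
    return p, M * p                     # stress state stays on J = M p'

for psi in (0.0, 12.0, 24.0):
    p_end, j_end = shear_on_failure_line(200.0, psi)
    print(f"psi = {psi:4.1f} deg  ->  p' = {p_end:7.1f} kPa, J = {j_end:6.1f} kPa")
# With psi = 0 the state stays put (a limit load exists); with psi > 0 both
# p' and J climb the failure line indefinitely, as in Figure 46.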
8.8 Influence Of The Shape Of The Yield And Plastic Potential Surfaces
As noted by Potts and Gens (1984) and Potts and Zdravkovic (1999), the shape of the plastic potential in the deviatoric plane can affect the Lode angle θ at failure in plane strain analyses. This implies that it will affect the value of the soil strength that can be mobilised. In many commercial software packages the user has little control over the shape of the plastic potential, and it is therefore important that its implications are understood. This phenomenon is investigated by considering the modified Cam clay constitutive model. Many software packages assume that both the yield and plastic potential surfaces plot as circles in the deviatoric plane. This is defined by specifying a constant value of the parameter MJ (i.e., the slope of the critical state line in J-p′ stress space). Such an assumption implies that the angle of shearing resistance, φ′, varies with the Lode angle, θ. By equating MJ to the expression for g(θ) given by Equation (19) and re-arranging, the following expression for φ′ in terms of MJ and θ is obtained:

φ′ = sin⁻¹ [ MJ cos θ / (1 − MJ sin θ / √3) ]    (33)

From this equation it is possible to express MJ in terms of the angle of shearing resistance, φ′TC, in triaxial compression (θ = -30°):

MJ(φ′TC) = 2√3 sin φ′TC / (3 − sin φ′TC)    (34)

Figure 48 : Variation of φ′ with θ for constant MJ
In Figure 48 the variation of φ′ with θ given by Equation (33) is plotted for three values of MJ. The values of MJ have been determined from Equation (34) using φ′TC = 20°, 25° and 30°. If the plastic potential is circular in the deviatoric plane, it can be shown that plane strain failure occurs when the Lode angle θ = 0°. Inspection of Figure 48 indicates that for all values of MJ there is a large change in φ′ with θ. For example, if φ′ is set to give φ′TC = 25°, then under plane strain conditions the mobilised φ′ value is φ′PS = 34.6°. This difference is considerable and much larger than indicated by careful laboratory testing. The difference between φ′TC and φ′PS becomes greater the larger the value of MJ. The effect of θ on the undrained strength, su, for the constant MJ formulation is shown in Figure 49. The variation has been calculated for OCR = 1, g(θ) = MJ, Ko = 1-sinφ′TC and κ/λ = 0.1.
Figure 49 : Effect of θ on su, for the constant MJ formulation
The equivalent variation based on the formulation which assumes a constant φ′, instead of a constant MJ, is given in Figure 50. This is also based on the above parameters, except that g(θ) is now given by Equation (18). The variation shown in this figure is in much better agreement with the available experimental data than the trends shown in Figure 49.
Figure 50 : Effect of θ on su, for the constant φ′ formulation
To investigate the effect of the plastic potential in a boundary value problem, two analyses of a rough rigid strip footing have been performed. The finite element mesh was similar to the one shown in Figure 43. The modified Cam clay model was used to represent the soil, with the following material parameters: OCR = 6, v1 = 2.848, λ = 0.161, κ = 0.0322 and ν = 0.2. In one analysis the yield and plastic potential surfaces were assumed to be circular in the deviatoric plane. A value of MJ = 0.5187 was used for this analysis, which is equivalent to φ′TC = 23°. In the second analysis a constant value of φ′ = 23° was used, giving a Mohr-Coulomb hexagon for the yield surface in the deviatoric plane. However, the plastic potential still gave a circle in the deviatoric plane and therefore plane strain failure occurred at θ = 0°, as for the first analysis. In both analyses the initial stress conditions in the soil were based on a saturated bulk unit weight of 18kN/m3, a ground water table at a depth of 2.5m and Ko = 1.227. Above the ground water table the soil was assumed to be saturated and able to sustain pore water pressure suctions. Coupled consolidation analyses were performed, but the permeability and time steps were chosen such that undrained conditions occurred. Loading of the footing was simulated by imposing increments of vertical displacement. In summary, the input to both analyses is identical, except that in the first the strength parameter MJ is specified, whereas in the second φ′ is input. In both analyses φ′TC = 23° and therefore any analyses in triaxial compression would give identical results. However, the strip footing problem is plane strain and therefore differences are expected.
Figure 51 : Load-displacement curves for two different approaches
The resulting load-displacement curves are given in Figure 51. The analysis with a constant MJ gave a collapse load some 58% larger than the analysis with a constant φ′. The implications for practice are clear: if a user is not aware of the plastic potential problem and is not fully conversant with the constitutive model implemented in the software being used, they could easily base the input on φ′TC = 23°. If the model uses a constant MJ formulation, this would then imply φ′PS = 31.2°, which in turn leads to a large error in the prediction of any collapse load.
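Equations (33) and (34) are easily evaluated numerically; the short sketch below simply reproduces the conversions quoted above (φ′TC = 25° giving φ′PS ≈ 34.6°, and MJ = 0.5187 corresponding to φ′TC = 23° and φ′PS ≈ 31.2°). It is an illustrative check only, not part of the original analyses.

import math

# Evaluation of Equations (33) and (34): for a circular plastic potential
# (constant MJ) the mobilised friction angle depends on the Lode angle theta.

def mj_from_phi_tc(phi_tc_deg):
    """Equation (34): MJ from the triaxial compression angle (theta = -30 deg)."""
    s = math.sin(math.radians(phi_tc_deg))
    return 2.0 * math.sqrt(3.0) * s / (3.0 - s)

def phi_from_mj(mj, theta_deg):
    """Equation (33): friction angle mobilised at Lode angle theta for constant MJ."""
    theta = math.radians(theta_deg)
    return math.degrees(math.asin(mj * math.cos(theta) /
                                  (1.0 - mj * math.sin(theta) / math.sqrt(3.0))))

mj = mj_from_phi_tc(25.0)
print(round(phi_from_mj(mj, 0.0), 1))        # plane strain (theta = 0): ~34.6 deg
print(round(mj_from_phi_tc(23.0), 4))        # ~0.5187, the value used for the footing
print(round(phi_from_mj(0.5187, 0.0), 1))    # ~31.2 deg mobilised in plane strain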
9.0 VALIDATION AND CALIBRATION OF COMPUTER SIMULATIONS
To date, less attention has been paid in the literature to the important issue of validation and reliability of numerical models in general, and specific software in particular, than has been paid to the development of the methods themselves. The work by Schweiger (1991) is one of the limited studies on the subject of model validation. There is now a strong need to define procedures and guidelines to arrive at reliable numerical methods and, more importantly, input parameters which represent accurately the strength and stiffness properties of the ground in situ. As should be evident from previous discussion, benchmarking is of great importance in geotechnical engineering, probably more so than in other engineering disciplines, such as structural engineering. Reasons for the importance of benchmarking may be summarized as follows:
• the domain to be analysed is often not clearly defined by the structure,
• it is not always clear whether continuum or discontinuum models are more appropriate for the problem at hand,
• a wide variety of constitutive models exists in the literature, but there is no "approved" model for each type of soil,
• in most cases construction details cannot be modelled very accurately in time and space (e.g., the excavation sequence, pre-stressing of anchors, etc.), at least not from a practical point of view,
• soil-structure interaction is often important and may lead to numerical problems (e.g., with certain types of interface elements),
• the implementation details and solution procedures may have a significant influence on the results of certain problems, but may not be important for others, and
• there are no approved implementation and solution procedures for commercial codes (such as implicit versus explicit solution strategies, return algorithms, etc.).
Obviously, there is currently considerable scope for developers and users of numerical models to exercise their personal preferences when tackling geotechnical problems. From a practical point of view, it is therefore very difficult to prove the validity of many calculated results because of the numerous modelling assumptions required. So far, no clear guidelines exist, and thus results for a particular problem may vary significantly if analysed by different users, even for reasonably well-defined working load conditions. Difficult issues such as these have been addressed by various groups, including a working group of the German Society for Geotechnics, viz., "1.6 Numerical Methods in Geotechnics". It is the aim of this group to provide recommendations for numerical analyses in geotechnical engineering. So far the group has published general recommendations (Meissner, 1991), recommendations for numerical simulations in tunnelling (Meissner, 1996) and it is expected that recommendations for deep excavations will also be published shortly. In addition, benchmark examples have been specified and the results obtained by various users employing different software have been compared. Some of the work presented by this group is summarised here. The efforts of the working group may be seen as a first step towards greater objectivity in numerical analyses of geotechnical problems in practice. To date, three example problems have been specified by the working group, and these have been discussed in two separate workshops.
The first two examples, involving a tunnel excavation and a deep open excavation, were rather idealised problems, with a very tight specification, so that little room for interpretation was left to the analysts. Despite the simplicity of the examples and the rather strict specifications, significant differences in the results were obtained, even in cases where the same software was utilised by different users. Further details of these examples are given below. The third example, which represents an actual application (a tied-back diaphragm wall in Berlin sand) that was only slightly modified in order to reduce the computational effort, will also be presented. Limited field measurements are available for this example, providing information on the order of magnitude of the deformations to be expected. Whereas in the first two examples the constitutive model and the parameters were pre-specified, in the latter example the choice of constitutive model was left to the user, and the parameter values had to be selected either from the literature, on the basis of personal experience, or determined from laboratory tests which were made available to the analysts.
9.1 Specifications For Benchmark Examples
Keeping in mind the purpose of benchmarking, from a practical point of view the following requirements for benchmark examples are suggested:
• no analytical solution is available, but commercial codes are capable of solving the problem,
• an actual practical problem should be addressed, simplified in such a way that the solution can be obtained with reasonable computational effort,
• no calibration of laboratory tests has been performed (this is done extensively in research and is of minor interest to engineers in practice),
• preferably the examples should be set in such a way that, in addition to global results, specific aspects can also be checked (e.g., handling of initial stresses, dilation behaviour, excavation procedures, etc.),
• the influence of different constitutive models on the predicted results should become apparent, and
• the problem of parameter identification for various constitutive models should be addressed.
Once a series of examples has been designed and solutions are available they could serve as:
• a check of commercial codes,
• learning aids for young geotechnical engineers to help them to become familiar with numerical analysis, and
• verification examples for proving competence in numerical analysis of geotechnical problems.
In addition, these examples will identify limitations of the present state of the art in numerical modelling in practice, provide the possibility to show alternative modelling assumptions, and highlight the importance of appropriate constitutive models.
9.2 Tunnel Excavation Example
Figure 52 depicts the geometry of the first validation example, a tunnel excavation problem, and Table 8 contains a list of the material parameters given to all participants in this exercise. Additional specifications are as follows.
Figure 52 : Geometry of tunnel excavation example
9.2.1 General Assumptions
• plane strain conditions apply,
• a linear elastic - perfectly plastic analysis with the Mohr-Coulomb failure criterion was required,
• perfect bonding existed between the shotcrete and the ground,
• the shotcrete lining should be represented by beam or continuum elements, with 2 rows of elements over the cross-section if continuum elements with quadratic shape function are used, and
• to account for deformations occurring ahead of the face (pre-relaxation), the load reduction method or a similar approach should be adopted.

Table 8. Material parameters for the "tunnel excavation" example
            E (kN/m2)   ν      φ (°)   c (kN/m2)   Ko    γ (kN/m3)
Layer 1     50000       0.3    28      20          0.5   21
Layer 2     200000      0.25   40      50          0.6   23
Shotcrete (d = 250 mm): linear elastic - E1 = 5 000 MPa, E2 = 15 000 MPa, ν = 0.15
9.2.2 Computational Steps
The following computational steps had to be performed by the analysts:
• the initial stress state was set to σv = γH, σh = KoγH, and subsequently the deformations were set to zero,
• the pre-relaxation factors given are valid for the load reduction method,
• construction stage 1: 40% pre-relaxation of the full cross section, and
• construction stage 2: excavation of the full cross section, installation of shotcrete with E = E2.
In addition to a full face excavation, excavation of a top heading and bench was also considered, but these results will not be discussed here (see Schweiger, 1997; 1998).
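The influence of the pre-relaxation factor can be illustrated with a deliberately crude one degree-of-freedom analogue of the load reduction method, in which a fraction β of the excavation-induced load is carried by the unsupported ground and the remainder by ground and lining acting together. The stiffnesses and load used below are hypothetical and serve only to show the trade-off that makes the 40% factor part of the specification.

# Minimal 1-DOF sketch (hypothetical numbers, not the benchmark itself) of the
# load reduction (pre-relaxation) method: a fraction beta of the excavation-
# induced load is released before the lining is installed, the remainder
# afterwards with ground and lining springs acting in parallel.

K_GROUND = 50.0          # assumed ground 'spring' stiffness  [MN/m]
K_LINING = 150.0         # assumed lining 'spring' stiffness  [MN/m]
F_EXC = 10.0             # assumed total excavation-induced force [MN]

def crown_response(beta):
    # stage 1: beta * F released on the unsupported ground alone
    u1 = beta * F_EXC / K_GROUND
    # stage 2: remaining load shared by ground and lining in parallel
    u2 = (1.0 - beta) * F_EXC / (K_GROUND + K_LINING)
    n_lining = (1.0 - beta) * F_EXC * K_LINING / (K_GROUND + K_LINING)
    return u1 + u2, n_lining

for beta in (0.2, 0.4, 0.6):
    u, n = crown_response(beta)
    print(f"beta = {beta:.1f}:  settlement = {u * 1000:6.1f} mm,"
          f"  lining force = {n:4.1f} MN")
# A larger pre-relaxation factor gives more ground deformation but a lighter
# lining load, which is why the factor had to be fixed in the specification
# for the results of different analysts to be comparable.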
9.2.3 Selected Results
In the following, some of the most interesting results are presented. In Figure 53, surface settlements obtained from 10 different analyses are compared. 50% of the calculations predict 52 or 53 mm as the maximum settlement and most of the others were within approximately 20% of these values. However, the calculations identified as TL1A and TL10 show significantly lower settlements. In both cases the reason was that it was not possible to apply the load reduction method correctly and therefore other methods have been used. TL10 used the stiffness reduction method and it is known that it is difficult to match these two methods (Schweiger et al., 1997). TL9 also employed the stiffness reduction method but obtained larger settlements, which is rather unusual.
Figure 53 : Surface settlements for tunnel excavation in one step
Figure 54 : Comparison of maximum normal forces and bending moments in tunnellining
Figure 54 shows the calculated normal forces and bending moments in the shotcrete lining. Reasonable agreement is observed for the normal forces, with the exception of TL1 and TL10, but a wide scatter is obtained for the bending moments. Figure 54 also indicates the locations of the maximum bending moments, and the significant differences in magnitude (approximately 300%) and location are obvious. Even if TL1, TL9 and TL10 are excluded because they did not adhere exactly to the problem specification, the variation is still 70%. Unfortunately, it was not possible from the information available to identify clearly the reasons for these discrepancies, but most probably they are due to differences in modelling the lining and in the evaluation of the internal forces.
9.3 Deep Excavation Example
Figure 55 illustrates the geometry and the excavation stages analysed in this problem, and Table 9 lists the relevant material parameters. Additional specifications are as follows.
Figure 55 : Geometry of the deep excavation example
9.3.1 General Assumptions
• plane strain conditions apply,
• a linear elastic - perfectly plastic analysis with the Mohr-Coulomb failure criterion was required,
• perfect bonding was to be assumed between the diaphragm wall and the ground,
• struts used in the excavation could be modelled as rigid members (i.e., the horizontal degree of freedom was fixed),
• any influence of the diaphragm wall construction could be neglected, i.e., the initial stresses were established without the wall, and then the wall was "wished-in-place", and
• the diaphragm wall was modelled using either beam or continuum elements, with 2 rows of elements over the cross section if continuum elements with quadratic shape function were adopted.
9.3.2 Computational Steps

Table 9. Material parameters for "deep excavation" example
            E (kN/m2)   ν     φ (°)   c (kN/m2)   Ko     γ (kN/m3)
Layer 1     20000       0.3   35      2.0         0.5    21
Layer 2     12000       0.4   26      10.0        0.65   19
Layer 3     80000       0.4   26      10.0        0.65   19
Diaphragm wall (d = 800 mm): linear elastic - E = 21 000 MPa, ν = 0.15, γ = 22 kN/m3

The following computational steps had to be performed by the various analysts:
• the initial stress state was set to σv = γH, σh = KoγH, all deformations were set to zero and then the wall was "wished-in-place",
• construction stage 1: excavation step 1 to a level of -4.0 m,
• construction stage 2: excavation step 2 to a level of -8.0 m, and strut 1 installed at -3.0 m, and
• construction stage 3: final excavation to a level of -12.0 m, and strut 2 installed at -7.0 m.
9.3.3 Comparison of Results
It is worth mentioning that 5 out of the 12 calculations submitted for comparison were made by different analysts using the same computer program. Figure 56 compares surface displacements for construction stage 1 and shows 2 groups of results. The lower values for the heave from calculations BG1 and BG2 may be explained by the use of interface elements, which were included even though the specification did not require them. The results of BG3 and BG12 could not be explained in any detail. There were indications, though, that for the particular program used a significant difference in vertical displacements was observed depending on whether beam or continuum elements were used for modelling the diaphragm wall. This emphasises the significant influence of different modelling assumptions and the need for evaluating the validity of these models under defined conditions. It may be worth mentioning that this effect was not observed to the same extent in the other programs used. Figure 57 shows the same displacements for the final excavation stage, and the results are now almost evenly distributed between the limiting values.
Figure 56 : Vertical displacement of surface behind wall: construction stage 1
Figure 57 : Vertical displacement of surface behind wall: final construction stage
It is apparent from Figures 56 and 57 that elasto-perfectly plastic constitutive models are not well suited for analysing the displacement pattern around deep excavations, especially for the surface behind the wall, because the predicted heave is certainly not realistic. However, it was not the aim of this exercise to compare results with actual field observations, but merely to see what differences are obtained when using slightly different modelling assumptions within a rather tight problem specification.
Figure 58 : Horizontal displacement of head of wall
It is interesting to compare predictions of the horizontal displacement of the head of the wall for excavation step 1. Only 50% of the analyses predict displacements towards the excavation (+ve displacement in Figure 58), whereas the other 50% predict movements towards the soil, which does not seem very realistic for a cantilever situation. Significant differences were not found in the predictions for the horizontal displacements of the bottom of the wall, the heave inside the excavation and the earth pressure distributions. Calculated bending moments varied within 30%, and strut forces for excavation step 2 varied from 155 to 232 kN/m. A more detailed examination of this example can be found in Schweiger (1997, 1998).
9.4 Tied-Back Deep Excavation Example
The final example presented here is closely related to an actual project in Berlin. Slight modifications have been introduced in modelling of the construction sequence, in particular the groundwater lowering, which was performed in various steps in the field but was modelled in one step prior to excavation. In this example the constitutive model to be used was not prescribed; the choice was left to the analysts. Some basic material parameters have been taken from the literature, and additional results from one-dimensional compression tests on loose and dense samples were given to the participants, together with the results of triaxial tests on dense samples. Thus the exercise closely represents the situation that is often faced in practice. Inclinometer measurements made during construction provided information on the actual behaviour in situ, although due to the simplifications mentioned above a one-to-one comparison is not possible. Of course, the measurements were not disclosed to the participants before they submitted their predictions. It is the aim of this section to demonstrate the necessity of performing exercises of this kind. However, due to space limitations only the most relevant aspects of the problem specification will be given here (Figure 59). For the same reason only a limited number of results will be presented. They indicate, however, the wide scatter of results that can be produced due to different interpretations of the available data. A much more comprehensive discussion of this validation problem may be found in Schweiger (2000). Some reference values for stiffness and strength parameters, obtained from the literature and frequently used in the design of excavations in Berlin sand, are given below.
Figure 59 shows the specification of the tied-back deep excavation example, including the excavation and groundwater levels, the anchor layout and prestressing loads, and the reference soil and wall parameters. Among the reference values indicated with the figure are a unit weight of the sand of 19 kN/m3 above the groundwater table and 10 kN/m3 below it, Ko = 1-sinφ′, and a linear elastic diaphragm wall with E = 30 000 MPa, ν = 0.15 and γ = 24 kN/m3.
The angle of wall friction was specified as φ′/2.
9.4.1 General Assumptions
Additional specifications for this example are as follows:
• plane strain conditions could be assumed,
• any influence of the diaphragm wall construction could be neglected, i.e., the initial stresses were established without the wall, and then the wall was "wished-in-place" and its different unit weight incorporated appropriately,
• the diaphragm wall could be modelled using either beam or continuum elements,
• interface elements existed between the wall and the soil,
• the domain to be analysed was as suggested in Figure 59,
• the horizontal hydraulic cut-off that existed at a depth of –30.00 m was not to be considered as structural support, and
• the pre-stressing anchor forces were given as design loads.
9.4.2 Computational Steps
The following computational steps had to be performed by the various analysts:
• the initial stress state was given by σv = γH, σh = KoγH,
• the wall was "wished-in-place" and the deformations reset to zero,
• construction stage 1: groundwater-lowering to –17.90 m,
• construction stage 2: excavation step 1 (to level -4.80 m),
• construction stage 3: activation of anchor 1 at level –4.30 m and prestressing,
• construction stage 4: excavation step 2 (to level -9.30 m),
• construction stage 5: activation of anchor 2 at level –8.80 m and prestressing,
• construction stage 6: excavation step 3 (to level -14.35 m),
• construction stage 7: activation of anchor 3 at level –13.85 m and prestressing, and
• construction stage 8: excavation step 4 (to level –16.80 m).
The length of the anchors and their prestressing loads are indicated in Figure 59.
9.4.3 Brief Summary Of Assumptions Of Submitted Analyses
In Tables 10 and 11 the main features of all analyses submitted have been summarised in order to highlight the different assumptions made, according to the personal preferences and experience of the participants. It follows from Tables 10 and 11 that a wide variety of programs and constitutive models has been employed to solve this problem. Only a limited number of analysts utilised the laboratory test results provided in the specification to calibrate their models. Most of the analysts used data from the literature for Berlin sand, or their own experience, to arrive at input parameters for their analysis. Close inspection of Tables 10 and 11 reveals that only marginal differences exist in the assumptions made about the strength parameters for the sand (everybody believed the experiments in this respect): the angle of internal friction φ′ was taken as 36° or 37°, and a small cohesion was assumed by many analysts to increase numerical stability. A significant variation is observed, however, in the assumption of the dilatancy angle ψ, with values ranging from 0° to 15°. An even more significant scatter is observed in the assumed soil stiffness parameters, although most analysts assumed an increase of stiffness with depth, either by introducing some sort of power law, similar to the formulation presented by Ohde (1951), which in turn corresponds to the formulation by Janbu (1963), or by defining different layers with different Young's moduli. Additional variation is introduced by different formulations for the interface elements, element types, domains analysed and modelling of the prestressed anchors. Some computer codes, and possibly some analysts, may have had problems in modelling the prestressing of the ground anchors, as part of the prescribed prestressing force appears to have been lost to deformations occurring in the ground; where this is the case a remark has been included in Table 10. The constitutive model "Hardening Soil" corresponds to a shear and volumetric hardening plasticity model provided in the commercially available finite element code PLAXIS. It also features a stress-dependent stiffness, different for loading and unloading or reloading paths.
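As an illustration of the kind of depth-dependent stiffness most analysts adopted, the sketch below evaluates a Janbu/Ohde-type power law; the reference stiffness, reference stress and exponent used here are placeholders chosen for illustration and are not the benchmark input of any participant.

# Sketch of a stress/depth-dependent stiffness in the spirit of Ohde (1951)
# and Janbu (1963); all numerical values below are illustrative placeholders.

P_REF = 100.0        # reference stress [kPa]
E_REF = 45000.0      # assumed reference stiffness at sigma_v = P_REF [kPa]
M_EXP = 0.5          # assumed exponent of the power law
GAMMA = 19.0         # unit weight of the sand [kN/m3]

def youngs_modulus(depth_m):
    """Stiffness increasing with the vertical stress level via a power law."""
    sigma_v = GAMMA * depth_m
    return E_REF * (sigma_v / P_REF) ** M_EXP

for z in (5.0, 10.0, 20.0, 40.0):
    print(f"z = {z:4.1f} m  ->  E ~ {youngs_modulus(z) / 1000:6.1f} MPa")
# Whether such a law is used, or the profile is approximated by a few layers
# with constant moduli, was one of the main sources of scatter in Table 11.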
9.4.4 Results
A total of 15 organisations (comprising University Institutes and Consulting Companies from Germany, Austria, Switzerland and Italy), referred to as B1 to B15 in the following, submitted predictions. Figure 60 shows the deflection curves of the diaphragm wall for all entries. It is obvious from the figure that the results scatter over a very wide range, which is unsatisfactory and probably unacceptable to most critical observers of this important validation exercise. For example, the predicted horizontal displacement of the top of the wall varied between –229 mm and +33 mm (-ve means displacement towards the excavation). Looking into more detail in Figure 60, it can be observed that entries B2, B3, B9a and B7 are well out of the “mainstream” of results. These are the ones which derived their input parameters mainly from the oedometer tests provided to all analysts, but it should be remembered that these tests showed very low stiffnesses as compared to values given in the literature. Some others had small errors in the specific weight, but these discrepancies alone cannot account for the large differences in predictions. As mentioned previously, field measurements are available for this project and although the example here has been slightly modified in order to facilitate the calculations, the order of magnitude of displacements is known. Figure 61 shows the measured wall deflections for the final construction stage together with the calculated results. Only those calculations which are considered to be “near” the measured values are included. The scatter is still significant. It should be mentioned that measurements have been taken by inclinometer, but unfortunately no geodetic survey of the wall head is available. It is very likely that the base of the wall does not remain fixed, as was assumed in the interpretation of the inclinometer measurements, and that a parallel shift of the measurement of about 5 to 10 mm would probably reflect the in situ behaviour more accurately. This has been confirmed by other measurements under similar conditions in Berlin. If this is true a maximum horizontal displacement of about 30 mm can be assumed and all entries that are within 100% difference (i.e., up to 60 mm) have been considered in the diagram. The predicted maximum horizontal wall displacements still varied between 7 and 57 mm, and the shapes of the predicted curves are also quite different from the measured shape. Some of the differences between prediction and measurements can be attributed to the fact that the lowering of the groundwater table inside the excavation has been modelled in one step whereas in reality a stepwise drawdown was performed (the same has been assumed by calculation B15). Thus the analyses overpredict horizontal displacements, the amount being strongly dependent on the constitutive model employed, as was revealed in further studies. In addition, it can be assumed that the details of the formulation of the interface element have a significant influence on the lateral deflections of the wall, and arguments similar to those discussed in the previous section for implementing constitutive laws also hold, i.e., no general guidelines and recommendations are currently available. A need for them is clearly evident from this exercise. Figure 62 depicts the calculated surface settlements, again only for the same solutions that are presented in Figure 61. These key displacement predictions vary from settlements of up to approximately 50 mm to surface heaves of about 15 mm. 
Considering the fact that calculation of surface settlements is one of the main goals of such an analysis, this lack of agreement is disappointing. It also highlights the pressing need for recommendations and guidelines that are capable of minimising the unrealistic modelling assumptions that have been adopted and, consequently, the unrealistic predictions that have been obtained. The importance of developing such guidelines should be obvious. Figure 63 shows predictions of the development of anchor forces for the upper layer of anchors. Maximum anchor forces for the final excavation stage range from 106 to 634 kN/m. As mentioned previously, some of the analyses did not correctly model the prestressing of the anchors, because they do not show the specified prestressing force in the appropriate construction step. Predicted bending moments, important from a design perspective, also differ significantly, ranging from 500 to 1350 kNm/m. Taking into account the information presented in Figures 60 to 63 and Table 10, it is interesting to note that no definitive conclusions are possible with respect to the constitutive model or assumptions concerning element types and so on. It is worth mentioning that even with the same finite element code (PLAXIS) and the same constitutive model (Hardening Soil Model) significant differences in the predicted results are observed. Clearly these differences depend entirely on the personal interpretation of the stiffness parameters from the information available. Again, it is noted that a more comprehensive coverage of this exercise is beyond the scope of this paper, but further details may be found in Schweiger (2000).
Table 10. Summary of analyses submitted for the tied back excavation problem
(entries give: No.; code; constitutive model; ν, φ′ (°), ψ (°), Ko; domain analysed, width x height (m); element types for soil / wall / anchor and grout body; interface; remark)

B1: Tunnel; Mohr-Coulomb; ν = 0.3, φ′ = 35, ψ = 5; 100 x 64; soil: 9 noded continuum; anchor/grout: bar / membrane
B2: Plaxis; Hardening Soil (z < 40 m), Mohr-Coulomb (z > 40 m); ν = 0.2, φ′ = 36, ψ = 6, Ko = 0.41; 100 x 100; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Einter > EBoden
B2a: Plaxis; Hardening Soil (z < 32 m), Mohr-Coulomb (z > 32 m); ν = 0.2, φ′ = 36, ψ = 6, Ko = 0.41; 100 x 100; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Rinter = 0.5
B3: Abaqus; hypoplastic without intergranular strains; Ko = 0.42; 161 x 162; soil: linear, wall: 4 noded continuum, anchor/grout: bar / bar; interface: ψ = 20; remark: prestress force ?
B3a: Abaqus; hypoplastic with intergranular strains; Ko = 0.42; 161 x 162; soil: linear, wall: 4 noded continuum, anchor/grout: bar / bar; interface: ψ = 20; remark: prestress force ?
B4: Ansys; Drucker Prager; ν = 0.3, φ′ = 35, ψ = 0, Ko = 0.43; 105 x 107; soil: linear, wall: 4 noded continuum, anchor/grout: bar / bar; remark: prestress force ?
B5: Sofistik; Mohr-Coulomb; ν = 0.3, φ′ = 35, ψ = 15; 80 x 60; wall: beam, anchor: spring; interface: Rinter = 0.5
B6: Z_Soil; Mohr-Coulomb; ν = 0.3, φ′ = 35, ψ = 15; 122 x 90; remark: no prestress force ?
B7: Feerepgt; Zienkiewicz/Pande/Schad; ν = 0.26, φ′ = 40.5, ψ = 13.5, Ko = 0.35; 90 x 60; soil: quadratic, wall: continuum, anchor/grout: continuum / continuum; remark: prestress force ?
B8: Plaxis; Hardening Soil, 2 layers (z < 20 m / z > 20 m); ν = 0.2, φ′ = 35, ψ = 10, Ko = 0.43; 90 x 70; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: after GE, ψ = 20.25
B9: Plaxis; Mohr-Coulomb; ν = 0.35, φ′ = 35, ψ = 5, Ko = 0.43; 150 x 100; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Rinter = 0.5
B9a: Plaxis; Hardening Soil; ν = 0.2, φ′ = 35, ψ = 10, Ko = 0.43; 150 x 100; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Rinter = 0.5
B10: Plaxis; Hardening Soil; ν = 0.2, φ′ = 36, ψ = 6, Ko = 0.41; 100 x 72; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Rinter = 0.61
B11: Plaxis; Hardening Soil, 2 layers (z < 20 m / z > 20 m); ν = 0.3, φ′ = 35, ψ = 0, Ko = 0.45; 150 x 120; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: rigid ?; remark: prestress force ?
B12: Plaxis; Mohr-Coulomb; ν = 0.3, φ′ = 35, ψ = 4; 90 x 92; soil: quadratic, wall: beam, anchor/grout: bar / membrane; interface: Rinter = 0.5; remark: prestress force ?
B13: Abaqus; hypoplastic with intergranular strains; 100 x 100; soil: linear + internal node, wall: beam, anchor: bar; interface: ψ = 15.5; remark: anchors fixed at boundary
B14: Befe; Shear Hardening Model with small strain stiffness; ν = 0.2, φ′ = 35, ψ = 5, Ko = 0.43; 120 x 100; soil: quadratic, wall: 8 noded continuum, anchor/grout: bar / bar; interface: Dicke = 0, ψ = 17.5
B15: Plaxis; Hardening Soil, 2 layers (z < 20 m / z > 20 m); ν = 0.2 / 0.2, φ′ = 35 / 41, ψ = 7 / 14, Ko = 0.43 / 0.34; 96 x 50; soil: quadratic, wall: beam + continuum, anchor/grout: bar / membrane; interface: Rinter = 0.8
Table 11. Summary of stiffness parameters used in tied back excavation analyses
(entries give: No.; constitutive model; E-dependence; number of layers; power; Eref,loading or Emin (kN/m2))

B1: Mohr-Coulomb; similar to Ohde; 2; 0.5
B2: Hardening Soil / Mohr-Coulomb; similar to Ohde / linear increase; 2; 0.85
B2a: Hardening Soil / Mohr-Coulomb; similar to Ohde / linear increase; 2; 0.5
B4: Drucker Prager; layers; 4; -
B5: Mohr-Coulomb; layers; 3; -
B6: Mohr-Coulomb; layers; ?; -
B7: Zienkiewicz/Pande/Schad; -; -; -
B8: Hardening Soil; similar to Ohde; 2; 0.5
B9: Mohr-Coulomb; layers; 3; -
B9a: Hardening Soil; similar to Ohde; -; 0.65
B10: Hardening Soil; similar to Ohde; -; 0.5
B11: Hardening Soil; similar to Ohde; 2; 0.5
B12: Mohr-Coulomb; layers; 9; -
B14: Shear Hardening Model with "small strain stiffness"; similar to Ohde; -; 0.5
B15: Hardening Soil, 2 layers (z < 20 m / z > 20 m); similar to Ohde; 2; 0.5
Eref,loading or Emin (kN/m2): z < 20 m: 44 700; z