Thursday, October 29, 2009

I liked this article...

COMPUTATIONAL MODEL VERIFICATION AND VALIDATION

20 September 2008

Author: Dr Leslaw Kwasniewski

Virtual testing, numerical prototyping, computer simulation – each of these terms applies to simulating and analyzing physical processes with the aid of computers. Such processes are described by a mathematical (conceptual) model, formulated mostly as a system of partial differential equations (PDEs) with associated boundary and initial conditions. Physical problems of a practical nature, represented by such mathematical models, usually cannot be solved analytically due to the complexity of, for example, their geometry. To find the solution, the mathematical model is replaced by an approximate computational (computerized) model through the process of numerical discretization, which replaces the PDEs with sets of algebraic equations more suitable for computers. The discretization of space and time can be done using procedures such as the finite element, finite difference, finite volume, and boundary element methods.
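As a minimal illustration of what discretization means in practice (a sketch of ours, not an example from the article), consider the simplest of the methods listed, finite differences, applied to the one-dimensional Poisson problem -u''(x) = f(x) with zero boundary values. Central differences turn the differential equation into a tridiagonal system of algebraic equations, which a computer solves directly:

```python
import math

def solve_poisson(n):
    """Discretize -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using
    central differences at n interior points, turning the PDE-like
    problem into a tridiagonal algebraic system, then solve that
    system with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # f is chosen so that the exact solution is u(x) = sin(pi * x)
    rhs = [h * h * math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Discrete equations: -u[i-1] + 2*u[i] - u[i+1] = h^2 * f(x[i])
    cp = [0.0] * n   # modified super-diagonal after forward elimination
    dp = [0.0] * n   # modified right-hand side
    cp[0] = -0.5
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]            # pivot after eliminating sub-diagonal
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return x, u
```

With 99 interior points the computed values agree with the analytical solution sin(pi*x) to within about 1e-4, and the error shrinks as the mesh is refined, which is exactly the approximation-by-algebra trade-off the paragraph above describes.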

There are many fields of engineering and physics that use the adjective “computational” for computer simulations. Computational fluid dynamics, solid mechanics, and structural dynamics are encompassed by Computational Science and Engineering (CS&E) [1] and Computational Engineering and Physics (CE&P) [2]. In solid mechanics and structural dynamics, space discretization is mostly done with the finite element (FE) method mentioned previously. Numerous FE computer programs are widely applied in many industries, such as the automotive and aerospace industries. Finite element software can be divided into two groups: commercial general-purpose codes, and research codes usually dedicated to more specific, narrower purposes.

CS&E is an area of science and technology that is growing almost as fast as the capabilities of computers. More efficient computers and software result from continuous software development and rapidly increasing hardware capabilities. (Moore’s law states that computer power increases by a factor of two every eighteen months [3].) For example, the general-purpose code LS-DYNA [4] (a finite element-based simulation package) originally had 50,000 lines of code and approached 2 million lines in little more than a decade [1]. The improvements in computational capabilities are well illustrated by an example presented in Belytschko [5]: in the 1970s, a 20 m/s crash test simulation using a 300-element vehicle model took about 30 hours of computer time, at a cost equivalent to the three-year salary of a university professor. Today’s applications running on multiprocessor machines allow the number of finite elements to grow into the tens of millions in some FE models [6].

On the other hand, an increased number of finite elements in a model does not guarantee its correctness. The increasingly widespread implementation of computational methods in research and technology raises questions about the predictive capabilities of computer simulations. There are many contradictory opinions about the validity of computer models. Consider G. Box’s well-known statement: “Essentially, all models are wrong, but some are useful” [7]. Early in the development of the finite element method, the Journal of Applied Mechanics rejected FE papers for being insufficiently scientific [5]. Today’s general attitude is definitely evolving towards more acceptance, and FE results appear in numerous technical and scientific papers from many different research areas.

The level of predictive requirements and capabilities varies for different research areas. A computer simulation can be very precise (sometimes to all zeros and ones) for the modeling of electronic systems, in the numerical prototyping of microchips, for example; but it can be much less exact for the mathematical modeling of the atmosphere, with numerical weather predictions still only short-range. The predictive capacity of computational mechanics, located somewhere between the preceding two examples, depends on the complexity level of the problem under consideration. Belytschko and Mish [8] identified the main barriers to computability as the smoothness and stability of the response and the uncertainties in the load, the boundary and initial conditions, and the constitutive equations. Additional difficulties arise when the physical problem involves coupled physics.

Verification and validation (V&V) is increasingly recognized as the primary method for establishing confidence in computer simulations [9]. Recently, much attention has been devoted to V&V methodology, with a large body of research (see [2] for a review of this literature) and the first guides and standards published as a result (see [10-13]). The difference between verification and validation is probably most accurately expressed by Roache’s informal statement: “Verification deals with mathematics; validation deals with physics” [14]. Verification compares the computational solution with a highly accurate (analytical or numerical) benchmark solution, whereas validation compares the numerical solution with experimental data. Both share two goals: to detect a model’s significant discrepancies, and to reduce and estimate the removable and unavoidable errors. Each phase in the modeling process – the transition from the conceptual model to its mathematical representation, the definition of the discrete (computational) model, and the solution – generates errors and can produce large discrepancies. For many complex systems, the various sources of errors and discrepancies cannot be identified without V&V.
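One standard verification exercise, sketched below under our own assumptions (the article names no specific procedure), is to compare a numerical solution against an analytical benchmark on two step sizes and compute the observed order of accuracy. Here an explicit Euler integrator for the decay equation y' = -k*y is checked against its exact solution; for a correctly implemented first-order scheme the observed order should approach 1:

```python
import math

def euler_decay(h, t_end=1.0, k=1.0, y0=1.0):
    """Integrate y'(t) = -k*y(t) with explicit Euler steps of size h.
    The exact (benchmark) solution is y0 * exp(-k * t_end)."""
    y = y0
    for _ in range(round(t_end / h)):
        y -= h * k * y
    return y

def observed_order(h_coarse, h_fine):
    """Estimate the observed order of accuracy from errors against the
    analytical benchmark on two step sizes. A mismatch between the
    observed and theoretical order signals a verification problem."""
    exact = math.exp(-1.0)
    e_coarse = abs(euler_decay(h_coarse) - exact)
    e_fine = abs(euler_decay(h_fine) - exact)
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
```

Running `observed_order(0.01, 0.005)` yields a value close to 1.0, matching the theoretical order of explicit Euler; a coding error in the scheme would typically drag the observed order below the theoretical one, which is what makes this comparison a verification tool in Roache’s “mathematics” sense.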

There are two perspectives on V&V: that of code developers and that of analysts (the users of the codes). In software V&V the code itself is the subject, whereas in model V&V the whole modeling process for a specific physical problem is considered. Code developers are more involved in code verification, while the analyst’s responsibility is oriented more towards validation. However, many commercial codes provide validation examples in their documentation, and analysts verifying their models can also detect programming flaws and in this way participate in code verification.

Since a complex computational model or a large code cannot be proven totally correct (one cannot prove that there are no errors) and can only be disproved [2], V&V is sometimes compared to accumulating evidence for a legal case [15]. Consequently, due to ever-limited resources (people, money, time), full verification and validation is possible only for simple systems. As stated in Belytschko and Mish [8], the number of execution paths in a typical commercial code is often so large that some paths are never explored, even after years of service. This is why good, reliable software should be supported by reports of all known bugs and be continuously and seriously improved, not merely modified. This is also an argument for supporters of specialized programs, who point at the “black box” nature of large, general-purpose commercial codes. On the other hand, commercial codes usually have a large number of users testing the software.

Validation can be considered a final check that reveals remaining errors and estimates the accuracy of the simulation. This is done by comparing the computational solution with experimental data. Disagreements between them can be caused by differences between the mathematical and physical models, by differences between the conceptual and computerized models, and also by a discrepancy between the physical model (our concept of a real process) and the particular subject of the experiment used for validation. The last situation can happen when the physical model represents a family of supposedly similar objects (e.g., a type of vehicle) and the experimentally tested member of the group is not sufficiently representative; the discrepancy can be caused by deterioration due to service or by manufacturing flaws.
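A simple way to make such a comparison quantitative, sketched here with hypothetical numbers of our own (the article reports no data), is to measure the relative error of a simulated quantity against the mean of repeated experiments, and to put it next to the experimental scatter:

```python
from statistics import mean, stdev

def validation_check(simulated, measurements):
    """Compare one simulated quantity against repeated experimental
    measurements: return the relative error with respect to the
    experimental mean, plus the experimental scatter (coefficient of
    variation) for context."""
    m = mean(measurements)
    rel_err = abs(simulated - m) / abs(m)
    scatter = stdev(measurements) / abs(m)
    return rel_err, scatter

# Hypothetical peak decelerations (in g) from three nominally identical
# crash tests of one vehicle type, and a hypothetical FE prediction.
tests = [41.8, 44.1, 43.0]
rel_err, scatter = validation_check(42.5, tests)
```

In this made-up case the prediction lies within the experimental scatter, so the remaining disagreement cannot be distinguished from the random variation between nominally identical specimens, which is precisely the distinction the next paragraph draws.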

At this point, we should distinguish conceptually between significant discrepancies, which have precisely determinable sources, and the random variation of physical parameters that characterizes the physical model (a group of similar objects) and leads to nondeterministic experimental results. For example, when testing an existing engineering steel structure, we can obtain unexpected results (a significant discrepancy) due to a few connections missing from the original design, or due to degraded connections. Quite often, experiments not only provide data for validation; through careful analysis and comparison with computed results, they can also reveal such imperfections. In our opinion, when there is a significant disagreement between the experiment and the calculations, the reasons should be sought in both places.
