{"id":29712,"date":"2023-03-28T15:15:08","date_gmt":"2023-03-28T14:15:08","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=29712"},"modified":"2023-03-28T15:16:08","modified_gmt":"2023-03-28T14:16:08","slug":"high-order-predictive-modelling-methodology-for-optimal-results-with-reduced-uncertainties","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/high-order-predictive-modelling-methodology-for-optimal-results-with-reduced-uncertainties\/29712\/","title":{"rendered":"High-order predictive modelling methodology for optimal results with reduced uncertainties"},"content":{"rendered":"
Professor Dan Gabriel Cacuci highlights a breakthrough methodology which overcomes the curse of dimensionality while combining experimental and computational information to predict optimal values with reduced uncertainties for responses and parameters characterising forward/inverse problems.
The modelling of a physical system and/or the result of an indirect experimental measurement requires consideration of the following modelling components (a minimal data-structure sketch follows the list):
- A mathematical/computational model comprising equations (expressing conservation laws) that relate the system's independent variables and parameters to the system's state (i.e., dependent) variables;
- Probabilistic and/or deterministic constraints that delimit the ranges of the system's parameters;
- One or several computational results, customarily referred to as 'responses' (or objective functions, or indices of performance), which are computed using the computational model; and
- Experimentally measured responses, with their respective nominal (mean) values and uncertainties (variances, covariances, skewness, kurtosis, etc.).
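For concreteness, the sketch below groups these four components into a single structure. It is a minimal illustration under assumed names (ModelSpec and its fields are hypothetical, not part of any established package or of the methodology discussed in this article).

```python
# Minimal illustrative sketch (all names hypothetical): the four modelling
# components grouped into one structure.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ModelSpec:
    # 1. Computational model: maps parameters to state (dependent) variables,
    #    i.e. solves the conservation-law equations.
    solve: Callable[[np.ndarray], np.ndarray]
    # 2. Parameter constraints: nominal values, bounds delimiting the ranges,
    #    and a covariance matrix for probabilistic constraints.
    param_mean: np.ndarray
    param_bounds: np.ndarray   # shape (n_params, 2): [low, high] per parameter
    param_cov: np.ndarray
    # 3. Computed response: a functional of the state (objective function /
    #    index of performance).
    response: Callable[[np.ndarray], float]
    # 4. Measured responses: nominal (mean) values plus uncertainties
    #    (here summarised by a covariance matrix).
    measured_mean: np.ndarray
    measured_cov: np.ndarray
```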
The results of either measurements or computations are never perfectly accurate. On the one hand, results of measurements inevitably reflect the influence of experimental errors, imperfect instruments, or imperfectly known calibration standards; around any reported experimental value, therefore, there always exists a range of values that may also be plausibly representative of the true but unknown value of the measured quantity. On the other hand, computations are afflicted by errors stemming from numerical procedures, uncertain model parameters, boundary/initial conditions, and/or imperfectly known physical processes or problem geometry. Knowing just the nominal values of experimentally measured or computed quantities is therefore insufficient for applications: the quantitative uncertainties accompanying measurements and computations are also needed, along with the respective nominal values. Extracting 'best estimate' values for model parameters and predicted results, together with 'best estimate' uncertainties for these parameters and results, requires the combination of experimental and computational data, including their accompanying uncertainties (standard deviations, correlations).
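As a textbook illustration of why combining information reduces uncertainty (a standard result, not specific to the methodology discussed here), consider two independent, unbiased estimates $x_1 \pm \sigma_1$ and $x_2 \pm \sigma_2$ of the same quantity. The inverse-variance weighted combination is

```latex
\hat{x} = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma_{\hat{x}}^2 = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}
\le \min\!\left(\sigma_1^2, \sigma_2^2\right),
```

so the combined ('best estimate') variance is always smaller than either input variance. Predictive modelling generalises this idea to correlated, multivariate parameters and responses.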
Predictive modelling
The goal of 'predictive modelling' is to obtain such 'best estimate' optimal values, with reduced uncertainties, in order to predict future outcomes while accounting for all recognised errors and uncertainties. Predictive modelling requires reasoning with incomplete, error-afflicted and occasionally discrepant information, and comprises three key elements, namely: data assimilation and model calibration; quantification of the validation domain; and model extrapolation.
'Data assimilation and model calibration' addresses the integration of experimental data for the purpose of updating the parameters underlying the computer/numerical simulation model. Important components of model calibration include quantification of uncertainties in the data and the model, quantification of the biases between model predictions and experimental data, and computation of the sensitivities of the model responses to the model's parameters. For large-scale models, current calibration methods are hampered by the significant computational effort required to compute exhaustively and exactly the requisite response sensitivities. Reducing this computational effort is paramount, and methods based on adjoint sensitivity models show great promise in this regard.
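The computational advantage of adjoint methods can be seen on a toy linear model $A(p)\,u = b$ with scalar response $R = c^\top u$: finite differences need one extra forward solve per parameter, whereas a single adjoint solve $A^\top \psi = c$ yields all sensitivities at once. The sketch below is an assumed illustration (the 3x3 model and its parameter dependence are invented for the example), not the author's code or methodology.

```python
# Toy comparison: finite-difference vs. adjoint sensitivities for A(p) u = b,
# response R = c^T u. Both approaches should agree to O(eps).
import numpy as np

def assemble(p):
    """Hypothetical 3x3 operator whose diagonal depends on parameters p."""
    A = np.array([[2.0 + p[0], -1.0,        0.0],
                  [-1.0,        2.0 + p[1], -1.0],
                  [0.0,        -1.0,        2.0 + p[2]]])
    b = np.array([1.0, 0.0, 1.0])
    return A, b

def response(p, c):
    A, b = assemble(p)
    return c @ np.linalg.solve(A, b)

p0 = np.array([0.1, 0.2, 0.3])
c = np.array([1.0, 1.0, 1.0])

# Finite differences: one extra forward solve per parameter (N solves total).
eps = 1e-6
fd = np.array([(response(p0 + eps * np.eye(3)[i], c) - response(p0, c)) / eps
               for i in range(3)])

# Adjoint method: one forward solve plus ONE adjoint solve, regardless of N.
A, b = assemble(p0)
u = np.linalg.solve(A, b)        # forward solution
psi = np.linalg.solve(A.T, c)    # adjoint solution
# dR/dp_i = -psi^T (dA/dp_i) u; here dA/dp_i has a single 1 at entry (i, i).
adj = np.array([-psi[i] * u[i] for i in range(3)])

print(fd)    # finite-difference estimates
print(adj)   # adjoint results (match fd to ~eps)
```

For N parameters, the finite-difference column costs N additional forward solves, while the adjoint column stays at one extra solve per response; this is the saving the paragraph above refers to.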
The 'quantification of the validation domain' underlying the model under investigation requires estimating contours of constant uncertainty in the high-dimensional parameter space that characterises the application of interest. In practice, this involves identifying the regions where the predicted uncertainty meets specified requirements for the performance, reliability, or safety of the system of interest.
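As an illustration under assumed inputs (the covariance matrix, sensitivity function, and tolerance below are invented for the example), one can map the first-order propagated response uncertainty $\sigma_R(p) = \sqrt{S(p)\, C\, S(p)^\top}$ over a two-parameter grid and flag where it meets a requirement; the boundary of the flagged region is a contour of constant uncertainty.

```python
# Illustrative sketch: locate the region of a 2-parameter domain where the
# first-order propagated response uncertainty meets a tolerance.
import numpy as np

C = np.array([[0.04, 0.01],      # assumed parameter covariance matrix
              [0.01, 0.09]])

def sensitivities(p):
    """Hypothetical response sensitivities dR/dp at parameter point p."""
    return np.array([1.0 + p[0], 0.5 * p[1]])

tol = 0.25                        # assumed uncertainty requirement on R
grid = np.linspace(-1.0, 1.0, 101)
ok = np.zeros((grid.size, grid.size), dtype=bool)
for i, p0 in enumerate(grid):
    for j, p1 in enumerate(grid):
        S = sensitivities(np.array([p0, p1]))
        sigma_R = np.sqrt(S @ C @ S)     # 'sandwich rule' propagation
        ok[i, j] = sigma_R <= tol        # inside the acceptable region?

print(ok.mean())  # fraction of the sampled domain meeting the requirement
```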
'Model extrapolation' aims at quantifying the uncertainties in predictions under new environments or conditions, including both untested regions of the parameter space and higher levels of system complexity in the validation hierarchy. Extrapolation of models and the resulting increase of uncertainty are poorly understood, particularly the estimation of uncertainty that results from nonlinear coupling of two or more physical phenomena that were not coupled in the existing validation database.
The two oldest methodologies that aim at obtaining 'best estimate' optimal values by combining computational and experimental information are the 'data adjustment' method [1,2], which stems from the nuclear energy field, and the 'data assimilation' method [2,3], which is implemented in the geophysical sciences. Both of these methodologies attempt to minimise, in the least-squares sense, a user-defined functional that represents the discrepancies between computed and measured model responses. In contrast, Cacuci [4] has developed the BERRU-PM (Best-Estimate Results with Reduced Uncertainties Predictive Modelling) methodology, which replaces the subjective user-chosen functional to be minimised with the physics-based maximum-entropy principle, and which is also inherently amenable to high-order formulations.
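For orientation, the user-defined functional minimised by such least-squares methods typically has the generalised quadratic form below (a standard construction, written here in generic notation rather than the notation of Refs. [1-3]): with prior parameter values $\alpha^0$ and covariance $C_\alpha$, measured responses $r^m$ with covariance $C_m$, and computed responses $r(\alpha)$,

```latex
Q(\alpha) = \left[\alpha - \alpha^0\right]^\top C_\alpha^{-1} \left[\alpha - \alpha^0\right]
          + \left[r(\alpha) - r^m\right]^\top C_m^{-1} \left[r(\alpha) - r^m\right]
          \;\longrightarrow\; \min_{\alpha}.
```

BERRU-PM dispenses with the choice of such a quadratic cost: the maximum-entropy principle instead constructs the least-informative distribution consistent with the available means and covariances, so no functional form needs to be presupposed by the user.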