Quantifications of Margins and Uncertainties with Dakota
Physical basis

The methods for Quantification of Margins and Uncertainties (QMU) are based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software [Eldred2008], which is embedded within ISSM [Larour2012b, Larour2012a]. Available DAKOTA analyses include sensitivity and sampling analyses, which we rely on, respectively, to: 1) understand the sensitivity of model diagnostics to local variations in model fields and 2) identify how variations in model fields impact uncertainty in model diagnostics. Diagnostics of interest include ice volume, maximum velocity, and mass flux across user-specified profiles.

Mesh Partitioning

QMU analyses are carried out on partitions of the model domain. Each partition consists of a collection of vertices. The ISSM partitioner is versatile: it can assign one vertex to each partition (linear partitioning), assign the same number of vertices to each partition (unweighted partitioning), or weight partitions by a specified amount (equal area by default, to remove area-specific dependencies). Advanced partitioning is accomplished using the CHACO Software for Partitioning Graphs [Hendrickson1995], prior to setting up the model parameters for QMU analysis.

Sensitivity

Sensitivity, or local reliability, analysis computes the local derivative of diagnostics with respect to model inputs. It is used to assess the spatial distribution of this derivative, for the purpose of spatially ranking the influence of the various inputs. Given a response r that is a function of multiple variables x_1, x_2, ..., x_n in a local reliability analysis [Coleman1999], we have:

    r = r(x_1, x_2, ..., x_n)

where the sensitivities are defined as:

    s_i = ∂r/∂x_i

If each of the variables x_i is independent, the error propagation equation defines the variance of r as:

    σ_r² = Σ_i (∂r/∂x_i)² σ_{x_i}²

where σ_r is the standard deviation of r and σ_{x_i} is the standard deviation of x_i. Importance factors for each x_i are determined by dividing each term of the error propagation equation by σ_r²:

    I_i = (∂r/∂x_i)² σ_{x_i}² / σ_r²
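As a numerical illustration of the error propagation equation and importance factors, consider a hypothetical response r(x_1, x_2) = x_1² + 3·x_2 (purely for illustration, not an ISSM diagnostic), with sensitivities estimated by finite differences, as DAKOTA does:

```python
def sensitivities(r, x, h=1e-6):
    """Central-difference estimates of s_i = dr/dx_i (illustrative helper)."""
    s = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        s.append((r(xp) - r(xm)) / (2 * h))
    return s

# Hypothetical response and input statistics (for illustration only)
r = lambda x: x[0] ** 2 + 3 * x[1]
x0 = [2.0, 1.0]        # nominal values of x_1, x_2
sigma = [0.1, 0.2]     # standard deviations of x_1, x_2

s = sensitivities(r, x0)                                    # ~[4.0, 3.0]
var_r = sum((si * sg) ** 2 for si, sg in zip(s, sigma))     # error propagation
imp = [(si * sg) ** 2 / var_r for si, sg in zip(s, sigma)]  # importance factors
# The importance factors sum to 1. Here x_2 dominates despite its smaller
# sensitivity, because its standard deviation is larger.
```

Note how the ranking combines both the derivative and the input uncertainty: a very sensitive input with a tiny error can matter less than a mildly sensitive input with a large error.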
Note that the mean of the response is taken to be the response for the nominal value of each variable x_i. Sensitivities are computed from the function evaluations using finite differences. The finite-difference step size is user-defined by a parameter in the ISSM model. This analysis imposes the finite-difference step size as a small perturbation to x_i. The resulting sensitivities quantify how the location of errors impacts a specified model diagnostic (r). First, DAKOTA calls one ISSM model solve for an unperturbed control simulation. Then, for every x_i, ISSM perturbs each partition one at a time, and calls an ISSM solve for each. At every partition p, a resulting sensitivity s_i is assigned. Each s_i (defined above) depends on how much the outcome diverges from the control run. The result is a spatial mapping of the sensitivities and importance factors of r for every x_i. For transient solves, sensitivities are determined only at the completion of a forward run.

Method inputs: a nominal value for each x_i at every partition, and the finite-difference step size
Method outputs: sensitivities (s_i) and importance factors for each x_i at every partition

Sampling

Sampling analysis quantifies how input errors propagate through a model to impact a specified diagnostic, r. It is a Monte-Carlo-style method that relies upon repeated executions (samples) of the same model, where input variables are perturbed by different amounts at each partition for each individual run. The resulting statistics (means, standard deviations, cumulative distribution functions) are calculated after the specified number of samples have been run. For a particular sample, every x_i is perturbed by a different amount at each partition. Input values are perturbed randomly, per partition, within a prescribed range (described by a statistical distribution, e.g. normal or uniform). Once the variables are perturbed, the ISSM model solve is called.

Distributions: A normal distribution for a particular partition is fully described by a mean, μ, and a standard deviation, σ.
By definition, normal distributions cluster around μ and decrease towards the tails, in a Gaussian bell curve ranging from −∞ to +∞. A uniform distribution places greater emphasis on values closer to the tails: the probability of occurrence is equal for any value between a specified minimum and a maximum. If the user so chooses, any x_i can be treated as a scaled value. In this case, the distribution definitions are given in percentages, relative to a value of 1. For example, at the beginning of a particular sample for a scaled x_i, DAKOTA chooses a random percentage perturbation δ_p at each partition p. The value of δ_p falls within the defined error distribution, and the value of x_i, for the duration of this sample run, is x_i scaled by δ_p. The generation algorithm for δ_p is user-specified (e.g. Monte-Carlo or LHS [Swiler2004]). In the case where the user wants to sample several variables at the same time, a δ_p is chosen separately for each x_i before a particular sample run. The resulting statistics reflect the combined effects of the errors due to all x_i. For transient simulations, δ_p remains constant for the duration of a particular sample run. Note that statistics are determined only at the completion of each forward run.

Method inputs: the number of samples to be run and, for every x_i, a definition of the error distribution (error ranges may vary spatially by partition)
Method outputs: for r, the mean, standard deviation, and cumulative distribution function resulting from the errors due to x_i

Model parameters

The parameters relevant to uncertainty quantification can be displayed by typing:

>> md.qmu
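The per-partition perturbation scheme described in the Sampling section can be sketched as follows. This is an illustrative Python sketch, not ISSM or DAKOTA code; the field values and the function name are hypothetical:

```python
import random

def sample_scaled_field(field, mu=1.0, sigma=0.1, seed=1234):
    """Scale each partition's value by an independent draw delta_p from N(mu, sigma).
    Illustrative sketch of one sample's perturbation; DAKOTA drives this in ISSM."""
    rng = random.Random(seed)  # a fixed seed makes the sample reproducible
    return [rng.gauss(mu, sigma) * v for v in field]

# Hypothetical per-partition friction coefficients
friction = [100.0, 120.0, 80.0]
perturbed = sample_scaled_field(friction)
# each value is scaled by roughly 1 +/- 10% (one standard deviation)
```

In an actual sampling analysis, this perturbation is repeated for every sample, each followed by a full model solve, and the statistics of the diagnostic are accumulated over all samples.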
Building the CHACO and DAKOTA packages

In order to run DAKOTA with ISSM, you must compile and install the external package DAKOTA. In addition, your configure script should include the following:

--with-chaco-dir=$ISSM_DIR/externalpackages/chaco/install \
--with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \
More recent versions of DAKOTA also require the external package Boost, in which case your configure script should also include:

--with-boost-dir=$ISSM_DIR/externalpackages/boost/install/ \
Partitioning a Mesh

To partition your mesh using CHACO, use the following commands:

>>md.qmu.numberofpartitions=1000; % Note: CHACO can crash if this number is too large
>>md=partitioner(md,'package','chaco','npart',md.qmu.numberofpartitions,'weighting','on');
%weighting on for weighted partitioning (equal-area by default), off for equal vertex partitioning
>>md.qmu.partition=md.qmu.partition-1; %CHACO numbers partitions starting at 1; shift them to start at 0
or, for a one-to-one mapping of vertices to partitions (linear partitioning):

>>md.qmu.numberofpartitions=md.mesh.numberofvertices;
>>md=partitioner(md,'package','linear');
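The distinction between linear and unweighted partitioning can be illustrated with a minimal Python sketch (a hypothetical helper; the actual partitioning is performed by the ISSM partitioner and CHACO):

```python
def unweighted_partition(n_vertices, n_parts):
    """Assign contiguous vertex indices to partitions with (near-)equal vertex
    counts. A toy stand-in for unweighted partitioning, not the CHACO algorithm."""
    return [v * n_parts // n_vertices for v in range(n_vertices)]

# 10 vertices into 5 partitions: two vertices per partition
print(unweighted_partition(10, 5))   # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]

# linear partitioning is the limiting case of one vertex per partition
print(unweighted_partition(4, 4))    # [0, 1, 2, 3]
```

CHACO additionally balances partitions by a weight such as area and keeps each partition spatially connected on the mesh graph, which this index-based toy does not attempt.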
Setting up the QMU

For sensitivity:

>>md.qmu.method=dakota_method('nond_l');
This sets the method to local reliability (sensitivity). Other sensitivity settings: >>md.qmu.params.fd_gradient_step_size='0.1'; %finite difference step size, 0.001 by
default
For sampling:

>>md.qmu.method=dakota_method('nond_samp');
>>md.qmu.method(end)=...
dmeth_params_set(md.qmu.method(end),'seed',1234,'samples',500,'sample_type','lhs');
where 'seed' fixes the random seed (making runs reproducible), 'samples' sets the number of samples, and 'sample_type' selects the sample generation algorithm. Other sampling settings:

>>md.qmu.params.tabular_graphics_data=true; %Output all the information needed to create histograms of results
Other simple default settings for both sampling and sensitivity:

>>md.qmu.params.evaluation_concurrency=1;
>>md.qmu.params.analysis_driver='';
>>md.qmu.params.analysis_components='';
>>md.qmu.params.direct=true;
Setting your QMU variables

Example: Here, the input of interest is md.friction.coefficient, scaled, with its error defined as a normal distribution with a mean of 1 and a standard deviation of 10%:

>>md.qmu.variables.drag_coefficient=normal_uncertain('scaled_FrictionCoefficient',1,0.1);
This sets the standard deviation to a constant value at every partition. After it is initialized as above, the standard deviation can be set manually so that it varies spatially by partition:

>>md.qmu.variables.drag_coefficient.stddev=uncertainty_on_partition;
See also:

>>help normal_uncertain
>>help uniform_uncertain
>>help AreaAverageOntoPartition
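A spatially varying standard deviation is typically built by area-averaging a per-vertex uncertainty field onto the partitions, which is the role of AreaAverageOntoPartition. The following Python sketch shows the idea; the function name and inputs here are hypothetical, not the ISSM implementation:

```python
def average_onto_partitions(values, areas, partition):
    """Area-weighted average of a per-vertex field onto partitions.
    Illustrative sketch only; ISSM's AreaAverageOntoPartition works on the mesh."""
    n_parts = max(partition) + 1
    num = [0.0] * n_parts
    den = [0.0] * n_parts
    for v, p in enumerate(partition):
        num[p] += values[v] * areas[v]  # accumulate area-weighted values
        den[p] += areas[v]              # accumulate total partition area
    return [n / d for n, d in zip(num, den)]

# Hypothetical per-vertex standard deviations mapped onto two partitions
stddev = average_onto_partitions([0.1, 0.3, 0.2], [1.0, 1.0, 2.0], [0, 0, 1])
# stddev[0] is the area-weighted mean over vertices 0 and 1; stddev[1] over vertex 2
```

The resulting per-partition vector can then be assigned to a variable's stddev field as shown above.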
Setting your diagnostics

Example: Here, the diagnostics of interest are (1) the maximum velocity and (2) the mass flux through two different gates. The mass flux gates are defined by the ARGUS profile files listed in md.qmu.mass_flux_profiles:

%responses
md.qmu.responses.MaxVel=response_function('MaxVel',[],[0.01 0.25 0.5 0.75 0.99]);
md.qmu.responses.MassFlux1=response_function('indexed_MassFlux_1',[],[0.01 0.25 0.5 0.75 0.99]);
md.qmu.responses.MassFlux2=response_function('indexed_MassFlux_2',[],[0.01 0.25 0.5 0.75 0.99]);
%mass flux profiles
md.qmu.mass_flux_profiles={'../Exp/MassFlux1.exp','../Exp/MassFlux2.exp'};
md.qmu.mass_flux_profile_directory=pwd;
For more options see:

>>help response_function
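In the response definitions above, the bracketed values (0.01, 0.25, 0.5, 0.75, 0.99) appear to be probability levels at which the response's cumulative distribution function is reported after sampling. A minimal Python sketch of the underlying empirical CDF (illustrative only; the statistics are computed by DAKOTA internally):

```python
def empirical_cdf(samples, value):
    """Fraction of samples at or below value: the empirical cumulative
    distribution function estimated from sampling results (sketch)."""
    return sum(1 for s in samples if s <= value) / len(samples)

# Hypothetical diagnostic values from five samples
max_vel = [1800.0, 2100.0, 1950.0, 2300.0, 2050.0]
level = empirical_cdf(max_vel, 2050.0)   # 3 of 5 samples are <= 2050.0
```

Reading the CDF at the requested probability levels answers questions such as "below what maximum velocity do 99% of the sampled runs fall?".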
Running a simulation

Note: You must set your stress balance tolerance to 10^-5 or smaller in order to avoid the accumulation of numerical residuals between consecutive samples:

>>md.stressbalance.restol=10^-5;
To initiate the analysis of choice, use the following commands:

>> md.qmu.isdakota=1;
>> md=solve(md,'Masstransport');
The first argument is the model, and the second is the nature of the simulation one wants to run.

References