Up to now, this user's guide is only a brief description of how to use Clawpack inside AMROC. If other discretizations become available in the future, this document will have to be extended.
A new AMR computation has to be assembled from several pieces of code. For the standard cases this is quite simple, because existing objects merely have to be linked together with a few application-specific functions.
A specific format of the Makefile is required:
$(EQUATION)
selects the particular equations to be solved.
$(EXECUTABLE)
is the name of the executable.
$(OBJS)
specifies application-specific Fortran functions. See below.
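As an illustration, a minimal Makefile using these variables might look as follows. The concrete object names and the included common rules file are assumptions for this sketch, not the exact layout of the distribution:

```make
# Hypothetical example -- adapt the names to the actual application.
EQUATION   = euler
EXECUTABLE = amroc_euler2d
OBJS       = init.o physbd.o combl.o src.o setaux.o \
             $(EQUATION)/rp/rpn.o $(EQUATION)/rp/rpt.o

include Makefile.inc   # assumed file containing the common build rules
```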
Problem.h defines the C++ objects. In standard cases nothing has to be changed!
Application-specific Fortran functions:
init.f | Initial conditions. |
physbd.f | Boundary conditions. Predefined standard functions are in bc and $(EQUATION)/rp. |
combl.f | Initializes application-specific common blocks or creates specific data files. This function should write a file chem.dat with physical properties for hdf2v3 and hdf2file. |
$(EQUATION)/rp/rpn.f | Equation-specific Riemann solver in the normal direction. |
$(EQUATION)/rp/rpt.f | Equation-specific Riemann solver in the transverse direction. Not necessary in the one-dimensional case. If dimensional splitting is applied by setting method(3) < 0, dummy-routines/rpt.f can be used. |
$(EQUATION)/rp/flx.f, $(EQUATION)/rp/rec.f | These functions have to be linked if slope-limiting is applied by selecting method(2) = 3/4. Otherwise dummy-routines/flx.f and dummy-routines/rec.f may be used. |
$(EQUATION)/rp/chk.f | Physical consistency check for debugging. If method(4)=0 is always selected, dummy-routines/chk.f may be used. |
src.f | Source term for a splitting method. If method(5)=0, dummy-routines/src.f can be linked. |
setaux.f | Sets data in additional aux arrays. method(7)=n sets the number of aux arrays. If no auxiliary data are used (method(7)=0), link with dummy-routines/saux.f. |
The order of the values in SolverControl{} should not be changed, to ensure that solver.in can also be read by the visualizers hdf2v3 and hdf2file.
SolverControl {
}
Restart[0]
= 0 No restart.
= 1 Restart the computation from the last checkpointed state.
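For orientation, the beginning of a solver.in file could then be sketched as below. The whitespace-separated keyword/value syntax is an assumption; compare with the solver.in of an existing example application:

```
SolverControl {
  Restart 0
  ...              # further values in the fixed order described in this section
}
```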
AMRSolver {
}
Cells(1/2/3)[2]
Number of cells of the entire coarsest grid in each direction. Set the number of cells in unused dimensions to 1! The visualizers hdf2v3 and hdf2file, which are written for general 3d problems, read the solver.in file, too. The number of cells in each direction should always be divisible by GuCFactor (2 by default).
GeomMin(1/2/3)[0.0]
GeomMax(1/2/3)[1.0]
Geometric dimensions of the entire grid. Set GeomMin and GeomMax to 0.0 for unused dimensions to be compatible with hdf2v3 and hdf2file.
PeriodicBoundary(1/2/3)[0]
= 0 No periodic boundary.
= 1 Periodic boundary conditions in the particular direction.
CutOffs[0] n < 11
Number of regions that are cut out of the base grid.
CCells(i,1/2,1/2/3)
Definition of the ith cut-out region. In each direction the indices of the first and the last cell to be cut out have to be given. The number of cells of a cut-out region in each direction should always be divisible by GuCFactor (2 by default).
MaxLevels[1]
= l Max. number of levels allowed.
RefineFactor(1...l)[2]
Refinement factor used on a particular level to create subgrids.
Distribution[1]
= 1 DAGHCompositeDistribution (Hilbert's space-filling curve).
The following values select a fixed distribution of the coarsest grid. Higher levels are not considered.
= 2 DAGHBlockXDistribution
= 3 DAGHBlockYDistribution
= 4 DAGHBlockZDistribution
= 5 DAGHBlockXYDistribution
= 6 DAGHBlockYZDistribution
= 7 DAGHBlockXZDistribution
= 8 DAGHBlockAllDistribution
GuCFactor[1] >= 1
Important especially for DAGHCompositeDistribution. Coarsening factor on the coarsest level for the grid units that are the basis for the distribution algorithm. E.g. the usage of the shadow hierarchy for error estimation requires GuCFactor >= 2. GuCFactor should always be a power of 2!
RedistributeEvery[0]
= 0 Always redistribute the hierarchy.
= n Redistribute the hierarchy only after n coarse level steps.
CheckEvery[0]
= n Write checkpointing files after every n coarse level time steps. Use 0 to suppress writing checkpointing files completely.
CheckpointName
Name of the checkpointing files.
OutputName(1..components)
Basic file name for each component that is written out. Use - as a name to suppress a particular component.
StepControls[1] n < 11
Number of time step controls that follow.
(1..n)-Time {
Outputs[0] = n HDF outputs. Files are written exactly at times LastTime/n.
StepMode[0]
= 0 Use the fixed TimeStep [0.01] until LastTime [0.0] is reached.
= 1 Variable time steps on the basis of CFLControl [0.8] until LastTime is reached. Start with TimeStep as the first step. TimeStepMax [0.0] is the maximal time step possible.
= 2 Same as 1, but reject a step if the CFL number is greater than CFLRestart [0.9].
}
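A sketch of a time step control section with a single control, again with assumed syntax:

```
StepControls 1
1-Time {
  Outputs 10          # 10 HDF outputs up to LastTime
  StepMode 1          # variable time steps based on the CFL condition
  TimeStep 1.0e-4
  TimeStepMax 1.0e-2
  CFLControl 0.8
  LastTime 0.5
}
```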
MinEfficiency[0.7] < 1.0
Efficiency of the clustering algorithm: the minimal portion of cells flagged for refinement in a subgrid.
BlockWidth[1]
= n Minimal size of new subgrids in all directions.
BufferWidth[2]
= n Buffer size around flagged cells. The minimal value is 1; 2 is the ideal value.
NestingBuffer[1]
= n Minimal buffer size around subgrids. The conservative correction requires a minimal buffer size of 1.
DoFixup[1]
= 1 Apply the conservative correction at coarse-fine interfaces. Should always be turned on.
= 0 No conservative correction.
RegridEvery[1]
= 1 Always set flags and recompose the hierarchy when necessary.
= 0 No recomposition of the hierarchy. Intended mainly for testing.
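Putting the parameters above together, an AMRSolver section for a 2d computation might be sketched as follows. The whitespace-separated keyword/value syntax is an assumption; the unused third dimension is set to 1 cell and geometry 0.0, as required for compatibility with hdf2v3 and hdf2file:

```
AMRSolver {
  Cells(1) 100       # divisible by GuCFactor
  Cells(2) 40
  Cells(3) 1         # unused dimension
  GeomMin(1) 0.0
  GeomMax(1) 2.0
  GeomMin(2) 0.0
  GeomMax(2) 0.8
  GeomMin(3) 0.0
  GeomMax(3) 0.0     # unused dimension
  MaxLevels 3
  RefineFactor(1) 2
  RefineFactor(2) 4
  GuCFactor 2
  DoFixup 1
}
```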
ClawpackIntegrator {
Method(1)
Not used.
Method(2)
= 1 Only first-order increment waves are used.
= 2 Second-order correction terms are added, with a flux-limiter as specified by mthlim.
= 3 Slope-limiting of the conserved variables, with a slope-limiter as specified by mthlim(1).
> 3 User-defined slope-limiting method.
Slope-limiting is intended to be used with dimensional splitting! If slope-limiting is applied, $(EQUATION)/rp/flx.f and $(EQUATION)/rp/rec.f have to be linked. Otherwise dummy-routines/flx.f and dummy-routines/rec.f may be used.
1D case:
Method(3)
Not used.
2D case:
Method(3)
= -1 Dimensional splitting using Godunov splitting, i.e. formally first-order accurate. On a single computational node =-1 and =-2 give identical results, but =-1 is faster.
= -2 Dimensional splitting using Godunov splitting with a boundary update after each directional step. The necessary ghost cell synchronization is done by the surrounding AMROC framework. This selection ensures that the solution of the splitting method is independent of the number of computational nodes.
If dimensional splitting is applied, dummy-routines/rpt.f can be used.
= 0 The Donor cell method. No transverse propagation of either the increment wave or the correction wave.
= 1 Transverse propagation of increment waves (but not correction waves, if any).
= 2 Transverse propagation of correction waves is also included.
3D case:
Method(3)
= -1 Dimensional splitting using Godunov splitting, i.e. formally first-order accurate. On a single computational node =-1 and =-2 give identical results, but =-1 is faster.
= -2 Dimensional splitting using Godunov splitting with a boundary update after each directional step. The necessary ghost cell synchronization is done by the surrounding AMROC framework. This selection ensures that the solution of the splitting method is independent of the number of computational nodes.
If dimensional splitting is applied, dummy-routines/rpt.f can be used.
= 0 The Donor cell method. No transverse propagation of either the increment wave or the correction wave.
= 10 Transverse propagation of the increment wave as in 2D. Note that method (2,10) is unconditionally unstable.
= 11 Corner transport upwind of the increment wave. Note that method (2,11) is also unconditionally unstable.
= 20 Both the increment wave and the correction wave propagate as in the 2D case. Only to be used with method(2) = 2.
= 21 Corner transport upwind of the increment wave, and the correction wave propagates as in 2D. Only to be used with method(2) = 2.
= 22 3D propagation of both the increment wave and the correction wave. Only to be used with method(2) = 2.
Recommended settings:
First-order schemes: (1,10) stable for CFL < 1/2, (1,11) stable for CFL < 1.
Second-order schemes: (2,20) stable for CFL < 1/2, (2,22) stable for CFL < 1.
WARNING! The schemes (2,10) and (2,11) are unconditionally unstable.
Method(4)[0]
Determines if an auxiliary function checks the physical consistency of the data in a subgrid.
= 0 No consistency check. In this case dummy-routines/chk.f can be used. Otherwise $(EQUATION)/rp/chk.f should be linked.
= 1 Consistency check before the computation of a subgrid.
= 11 Like 1, but the program is aborted immediately if the check fails.
Method(5)[0]
Source term if splitting is used.
= 0 No source term. In this case dummy-routines/srcxx.f can be used.
= 1 Godunov splitting.
= 2 Strang splitting.
Method(6)[0]
Use capa differencing. Never used.
Method(7)[0]
= n Number of components in the additional auxiliary array. If n=0, dummy-routines/saux.f can be used.
Limiter(1...waves)[0]
= 1 Minmod
= 2 Superbee
= 3 van Leer
= 4 Monotonized centered
= 5 van Albada
}
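For instance, a second-order 2d wave propagation scheme with full transverse propagation and the van Leer limiter could be selected as sketched below (the keyword/value syntax is an assumption; compare with an existing solver.in):

```
ClawpackIntegrator {
  Method(1) 0        # not used
  Method(2) 2        # second-order correction terms with flux-limiter
  Method(3) 2        # transverse propagation of correction waves
  Method(4) 0        # no consistency check
  Method(5) 0        # no source term
  Method(6) 0        # no capa differencing
  Method(7) 0        # no auxiliary arrays
  Limiter(1) 3       # van Leer
}
```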
AMRFlagging {
}
TolGrad(1...components)[0.0]
Limit for the scaled gradient |y(comp,i+1) - y(comp,i)| of a particular component.
TolErrEst(1..components)[0.0]
Limit for the absolute error from Richardson extrapolation for a particular component. If TolErrEst = 0.0 for all components, no error estimation is calculated.
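A flagging section that refines only on the scaled gradient of the first component might be sketched as follows (assumed syntax):

```
AMRFlagging {
  TolGrad(1) 0.05    # flag cells where the scaled gradient exceeds 0.05
  TolErrEst(1) 0.0   # no Richardson error estimation
}
```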
Other flagging criteria (e.g. for derived quantities or relative errors) are available if Problem.h uses an extended object for flagging.
-- RalfDeiterding - 11 Dec 2004