Thorn Guide¶
Typographical Conventions:
- \(< \dots >\) Indicates a compulsory argument.
- \([ \dots ]\) Indicates an optional argument.
- \(|\) Indicates an exclusive or.
Parameter File Syntax¶
A parameter file is used to control the behaviour of a Cactus executable. It specifies initial values for parameters as defined in the various thorns’ param.ccl files. The name of a parameter file is often given the suffix .par, but this is not mandatory.
A parameter file is a text file whose lines are either comments or parameter statements. Comments begin with either ‘#’ or ‘!’. A parameter statement consists of one or more parameter names, followed by an ‘=’, followed by the value for the parameter.
The first parameter statement in any parameter file should set ActiveThorns, which is a special parameter that tells the program which thorns are to be activated. Only parameters from active thorns can be set (and only those routines scheduled by active thorns are run). By default, all thorns are inactive.
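For example, a minimal parameter file might look like this (the thorn list and values below are illustrative of the syntax only, not a complete working configuration):

```
# Comments start with '#' or '!'
ActiveThorns = "CoordBase CartGrid3D Carpet CarpetLib"

Cactus::cctk_run_title = "Example run"
Cactus::cctk_itlast    = 5
```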
Flesh¶
parameter¶
Description of this simulation
>>> Cactus::cctk_run_title = "Description of this simulation"
Give detailed information for each warning statement
>>> Cactus::cctk_full_warnings = yes
WARNING level 3 from host ubuntu process 0
  while executing schedule bin CCTK_BASEGRID, routine IOASCII::IOASCII_Choose1D
  in thorn IOUtil, file /home4/yuliu/Cactus/arrangements/CactusBase/IOUtil/src/Utils.c:361:
  -> IOUtil_1DLines: Found no default Cartesian coordinate system associated with grid variables of dimension 2, and no slice center index coordinates were given either - slice center will not be set up for output of 1D lines from 2D variables
>>> Cactus::cctk_full_warnings = no
WARNING[L3,P0] (IOUtil): IOUtil_1DLines: Found no default Cartesian coordinate system associated with grid variables of dimension 2, and no slice center index coordinates were given either - slice center will not be set up for output of 1D lines from 2D variables
Level | Description |
---|---|
0 | abort the Cactus run |
1 | the results of this run will be wrong |
2 | the user should know about this, but the problem is not terribly surprising |
3 | this is for small problems that can probably be ignored, but that careful people may want to know about |
4 | these messages are probably useful only for debugging purposes |
Print the scheduling tree to standard output
>>> Cactus::cctk_show_schedule = yes
if (recover initial data)
  Recover parameters
endif
Startup routines [CCTK_STARTUP]
Startup routines which need an existing grid hierarchy [CCTK_WRAGH]
Parameter checking routines [CCTK_PARAMCHECK]
Initialisation
  if (NOT (recover initial data AND recovery_mode is 'strict'))
    [CCTK_PREREGRIDINITIAL]
    Set up grid hierarchy
    [CCTK_POSTREGRIDINITIAL]
    [CCTK_BASEGRID]
    [CCTK_INITIAL]
    [CCTK_POSTINITIAL]
    Initialise finer grids recursively
    Restrict from finer grids
    [CCTK_POSTRESTRICTINITIAL]
    [CCTK_POSTPOSTINITIAL]
    [CCTK_POSTSTEP]
  endif
  if (recover initial data)
    [CCTK_BASEGRID]
    [CCTK_RECOVER_VARIABLES]
    [CCTK_POST_RECOVER_VARIABLES]
  endif
  if (checkpoint initial data)
    [CCTK_CPINITIAL]
  endif
  if (analysis)
    [CCTK_ANALYSIS]
  endif
  Output grid variables
do loop over timesteps
  [CCTK_PREREGRID]
  Change grid hierarchy
  [CCTK_POSTREGRID]
  Rotate timelevels
  iteration = iteration+1
  t = t+dt
  [CCTK_PRESTEP]
  [CCTK_EVOL]
  Evolve finer grids recursively
  Restrict from finer grids
  [CCTK_POSTRESTRICT]
  [CCTK_POSTSTEP]
  if (checkpoint)
    [CCTK_CHECKPOINT]
  endif
  if (analysis)
    [CCTK_ANALYSIS]
  endif
  Output grid variables
enddo
Termination routines [CCTK_TERMINATE]
Shutdown routines [CCTK_SHUTDOWN]
Routines run after changing the grid hierarchy:
  [CCTK_POSTREGRID]
>>> Cactus::cctk_show_schedule = no
None
Provide runtime of each thorn
>>> Cactus::cctk_timer_output = "full"
===================================================================================================
Thorn     | Scheduled routine in time bin             | gettimeofday [secs] | getrusage [secs]
===================================================================================================
CoordBase | Register a GH extension to store the coo  |          0.00000400 |       0.00000000
---------------------------------------------------------------------------------------------------
          | Total time for CCTK_STARTUP               |          0.00000400 |       0.00000000
===================================================================================================
---------------------------------------------------------------------------------------------------
          | Total time for simulation                 |          0.00004400 |       0.00000000
===================================================================================================
>>> Cactus::cctk_timer_output = "off"
None
Condition on which to terminate evolution loop
>>> Cactus::terminate = "iteration"
>>> Cactus::cctk_itlast = 0
----------------
   it |       |
      |   t   |
----------------
    0 | 0.000 |
>>> Cactus::terminate = "iteration"
>>> Cactus::cctk_itlast = 5
----------------
   it |       |
      |   t   |
----------------
    0 | 0.000 |
    1 | 1.000 |
    2 | 2.000 |
    3 | 3.000 |
    4 | 4.000 |
    5 | 5.000 |
>>> Cactus::terminate = "time"
>>> Cactus::cctk_initial_time = 10
>>> Cactus::cctk_final_time = 15
----------------
   it |        |
      |   t    |
----------------
    0 | 10.000 |
    1 | 11.000 |
    2 | 12.000 |
    3 | 13.000 |
    4 | 14.000 |
    5 | 15.000 |
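The iteration counts in the tables above follow directly from the termination condition. A small sketch of the logic (num_iterations is a hypothetical helper, assuming a fixed timestep dt as in the tables):

```python
import math

def num_iterations(terminate, itlast=0, t0=0.0, t1=0.0, dt=1.0):
    """Number of evolution steps implied by the termination condition (sketch)."""
    if terminate == "iteration":
        return itlast
    if terminate == "time":
        # step until t reaches cctk_final_time
        return math.ceil((t1 - t0) / dt)
    raise ValueError(terminate)

print(num_iterations("iteration", itlast=5))             # 5
print(num_iterations("time", t0=10.0, t1=15.0, dt=1.0))  # 5
```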
Grid¶
CoordBase¶
The CoordBase thorn provides a method of registering coordinate systems and their properties.
CoordBase provides a way for specifying the extent of the simulation domain that is independent of the actual coordinate and symmetry thorns. This is necessary because the size of the physical domain is not necessarily the same as the size of the computational grid, which is usually enlarged by symmetry zones and/or boundary zones.
Note
The black hole “source” region has a length scale of \(G M / c^{2}\), where G is Newton’s constant, M is the total mass of the two black holes, and c is the speed of light. The gravitational waves produced by the source have a length scale up to \(\sim 100 G M / c^{2}\). The source region requires grid zones of size \(\lesssim 0.01 G M / c^{2}\) to accurately capture the details of the black holes’ interaction, while the extent of the grid needs to be several hundred \(G M / c^{2}\) to accurately capture the details of the gravitational wave signal.
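This separation of scales is what motivates mesh refinement. A rough estimate (the ±300 GM/c² domain size is an assumption for illustration) shows why a uniform grid at source resolution is infeasible:

```python
# Rough cost of a *uniform* grid at source resolution (assumed numbers).
extent = 2 * 300.0   # domain size in GM/c^2 (+/- 300, an assumption)
h_fine = 0.01        # zone size needed near the black holes, GM/c^2
points_per_dim = extent / h_fine
total_points = points_per_dim ** 3
print(f"{total_points:.2e}")  # on the order of 1e14 grid points in 3D
```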
Parameter¶
Specifying the extent of the physical domain.
>>> CoordBase::domainsize = "minmax"    # lower and upper boundary locations
>>> CoordBase::xmin = -540.00
>>> CoordBase::ymin = -540.00
>>> CoordBase::zmin = -540.00
>>> CoordBase::xmax =  540.00
>>> CoordBase::ymax =  540.00
>>> CoordBase::zmax =  540.00
>>> CoordBase::spacing = "gridspacing"  # grid spacing
>>> CoordBase::dx = 18.0
>>> CoordBase::dy = 18.0
>>> CoordBase::dz = 18.0
INFO (Carpet): CoordBase domain specification for map 0:
  physical extent: [-540,-540,-540] : [540,540,540]   ([1080,1080,1080])
  base_spacing   : [18,18,18]
Boundary description.
>>> CoordBase::boundary_size_x_lower = 3
>>> CoordBase::boundary_size_y_lower = 3
>>> CoordBase::boundary_size_z_lower = 3
>>> CoordBase::boundary_size_x_upper = 3
>>> CoordBase::boundary_size_y_upper = 3
>>> CoordBase::boundary_size_z_upper = 3
>>> CoordBase::boundary_internal_x_lower = "no"
>>> CoordBase::boundary_internal_y_lower = "no"
>>> CoordBase::boundary_internal_z_lower = "no"
>>> CoordBase::boundary_internal_x_upper = "no"
>>> CoordBase::boundary_internal_y_upper = "no"
>>> CoordBase::boundary_internal_z_upper = "no"
>>> CoordBase::boundary_staggered_x_lower = "no"
>>> CoordBase::boundary_staggered_y_lower = "no"
>>> CoordBase::boundary_staggered_z_lower = "no"
>>> CoordBase::boundary_staggered_x_upper = "no"
>>> CoordBase::boundary_staggered_y_upper = "no"
>>> CoordBase::boundary_staggered_z_upper = "no"
>>> CoordBase::boundary_shiftout_x_lower = 0
>>> CoordBase::boundary_shiftout_y_lower = 0
>>> CoordBase::boundary_shiftout_z_lower = 0
>>> CoordBase::boundary_shiftout_x_upper = 0
>>> CoordBase::boundary_shiftout_y_upper = 0
>>> CoordBase::boundary_shiftout_z_upper = 0
INFO (Carpet): Boundary specification for map 0:
  nboundaryzones: [[3,3,3],[3,3,3]]
  is_internal   : [[0,0,0],[0,0,0]]
  is_staggered  : [[0,0,0],[0,0,0]]
  shiftout      : [[0,0,0],[0,0,0]]
INFO (Carpet): CoordBase domain specification for map 0:
  interior extent: [-522,-522,-522] : [522,522,522]   ([1044,1044,1044])
  exterior extent: [-576,-576,-576] : [576,576,576]   ([1152,1152,1152])
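The interior and exterior extents in the INFO output can be reproduced by hand. Below is a sketch, assuming the convention that for non-internal, non-staggered boundaries with zero shiftout the outermost grid point lies on the physical boundary (domain_extents is a hypothetical helper, not a Cactus API):

```python
def domain_extents(phys_min, phys_max, h, nboundary, shiftout=0):
    """One-dimensional exterior/interior extents (sketch; assumes
    non-internal, non-staggered boundaries)."""
    offset = nboundary - 1 + shiftout          # exterior edge vs. physical boundary
    exterior = (phys_min - offset * h, phys_max + offset * h)
    interior = (exterior[0] + nboundary * h,   # strip the boundary points
                exterior[1] - nboundary * h)
    return exterior, interior

exterior, interior = domain_extents(-540.0, 540.0, 18.0, 3)
print(exterior, interior)  # (-576.0, 576.0) (-522.0, 522.0)
```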
Warning¶
The boundary must be contained in the active part of the domain (boundaries <= domain_active).
>>> CoordBase::xmin = -200.0
>>> CoordBase::xmax = +200.0
CartGrid3D¶
CartGrid3D allows you to set up coordinates on a 3D Cartesian grid in a flexible manner.
Parameter¶
Get specification from CoordBase
>>> CartGrid3D::type = "coordbase"
Get specification from MultiPatch
>>> CartGrid3D::type = "multipatch" >>> CartGrid3D::set_coordinate_ranges_on = "all maps"
Output¶
3D Cartesian grid coordinates
>>> CarpetIOHDF5::out2D_vars = "grid::coordinates"
SymBase¶
Thorn SymBase provides a mechanism by which symmetry conditions can register routines that map interpolation points back into the actual computational domain when a global interpolator is called.
Adaptive Mesh Refinement¶
It is often the case in simulations of physical systems that the most interesting phenomena may occur in only a subset of the computational domain. In the other regions of the domain it may be possible to use a less accurate approximation, thereby reducing the computational resources required, and still obtain results which are essentially similar to those obtained if no such reduction is made.
In particular, we may consider using a computational mesh which is non-uniform in space and time, using a finer mesh resolution in the “interesting” regions where we expect it to be necessary, and using a coarser resolution in other areas. This is what we mean by mesh refinement (MR).
Carpet¶
Carpet is a mesh refinement driver for Cactus. It knows about a hierarchy of refinement levels, where each level is decomposed into a set of cuboid grid patches. The grid patch is the smallest unit of grid points that Carpet deals with. Carpet parallelises by assigning sets of grid patches to processors.
Each grid patch can be divided into up to four zones: the interior, the outer boundary, the ghost zone, and the refinement boundary. The interior is where the actual computations go on. The outer boundary is where the user’s outer boundary condition is applied; from Carpet’s point of view, these two are the same. The ghost zones are boundaries to other grid patches on the same refinement level (which might live on a different processor). The refinement boundary is the boundary of the refined region in a level, and it is filled by prolongation (interpolation) from the next coarser level.
Note
Carpet does not handle staggered grids. Carpet does not provide cell-centered refinement. Carpet always enables all storage.
Parameter¶
Use the domain description from CoordBase to specify the global extent of the coarsest grid.
>>> Carpet::domain_from_coordbase = yes
Use the domain description from MultiPatch
>>> Carpet::domain_from_multipatch = yes
Maximum number of refinement levels, including the base level
>>> Carpet::max_refinement_levels = 1
Ghost zones in each spatial direction
Note
Grid patches that are on the same refinement level never overlap except with their ghost zones. Conversely, all ghost zones must overlap with a non-ghost zone of another grid patch of the same level.
>>> Carpet::ghost_size = 3
INFO (Carpet): Base grid specification for map 0:
  number of coarse grid ghost points: [[3,3,3],[3,3,3]]
Fill past time levels from current time level after calling initial data routines.
Note
The boundary values of the finer grids have to be calculated from the coarser grids through interpolation. Because the time steps on the finer grids are smaller, there is not always a corresponding value on the coarser grids available. This makes it necessary to interpolate in time between time steps on the coarser grids. Three time levels allow for second-order interpolation in time.
>>> Carpet::init_fill_timelevels = "yes"
>>> Carpet::init_3_timelevels = "no"  # Set up 3 timelevels of initial data
Carpet currently supports polynomial interpolation, up to quadratic interpolation in time, which requires keeping at least two previous time levels of data. It also supports up to quintic interpolation in space, which requires using at least three ghost zones.
>>> Carpet::prolongation_order_space = 5
>>> Carpet::prolongation_order_time = 2
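These two parameters imply minimum resource requirements, following the rules quoted above (spatial interpolation of order p needs (p+1)/2 ghost zones on each side; temporal interpolation of order q needs q+1 time levels):

```python
prolongation_order_space = 5
prolongation_order_time = 2

# Quintic spatial interpolation needs 3 ghost zones on each side;
# quadratic temporal interpolation needs 3 time levels.
ghost_zones_needed = (prolongation_order_space + 1) // 2
time_levels_needed = prolongation_order_time + 1
print(ghost_zones_needed, time_levels_needed)  # 3 3
```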
Refinement factor
>>> Carpet::refinement_factor = 2
Temporal refinement factors over the coarsest level
>>> grid::dxyz = 18
>>> time::dtfac = 0.25
>>> Carpet::max_refinement_levels = 7
>>> Carpet::time_refinement_factors = "[1,1,2,4,8,16,32]"
INFO (Time): Timestep set to 4.5 (courant_static)
INFO (Time): Timestep set to 4.5 (courant_static)
INFO (Time): Timestep set to 2.25 (courant_static)
INFO (Time): Timestep set to 1.125 (courant_static)
INFO (Time): Timestep set to 0.5625 (courant_static)
INFO (Time): Timestep set to 0.28125 (courant_static)
INFO (Time): Timestep set to 0.140625 (courant_static)
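The reported timesteps follow from dt = dtfac * dxyz on the coarsest level, divided by each level’s time refinement factor. A sketch of the arithmetic (assuming the courant_static convention of thorn Time):

```python
dxyz = 18.0
dtfac = 0.25
time_refinement_factors = [1, 1, 2, 4, 8, 16, 32]

dt_coarse = dtfac * dxyz  # 4.5, as reported by thorn Time
timesteps = [dt_coarse / f for f in time_refinement_factors]
print(timesteps)  # [4.5, 4.5, 2.25, 1.125, 0.5625, 0.28125, 0.140625]
```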
Use buffer zones
>>> Carpet::use_buffer_zones = "yes"
INFO (Carpet): Buffer zone counts (excluding ghosts):
  [0]: [[0,0,0],[0,0,0]]
  [1]: [[9,9,9],[9,9,9]]
  [2]: [[9,9,9],[9,9,9]]
  [3]: [[9,9,9],[9,9,9]]
  [4]: [[9,9,9],[9,9,9]]
  [5]: [[9,9,9],[9,9,9]]
  [6]: [[9,9,9],[9,9,9]]
Carpet uses vertex-centered refinement. That is, each coarse grid point coincides with a fine grid point.
>>> Carpet::refinement_centering = "vertex"
Print detailed statistics periodically
>>> Carpet::output_timers_every = 512
>>> Carpet::schedule_barriers = "yes"  # Insert barriers between scheduled items, so that timer statistics become more reliable (slows down execution)
>>> Carpet::sync_barriers = "yes"      # Insert barriers before and after syncs, so that the sync timer is more reliable (slows down execution)
Try to catch uninitialised grid elements or changed timelevels.
Note
Checksumming and poisoning are means to find thorns that alter grid variables that should not be altered, or that fail to fill in grid variables that they should fill in.
>>> Carpet::check_for_poison = "no"
>>> Carpet::poison_new_timelevels = "yes"
>>> CarpetLib::poison_new_memory = "yes"
>>> CarpetLib::poison_value = 114
Base Multigrid level and factor
>>> Carpet::convergence_level = 0
>>> Carpet::convergence_factor = 2
INFO (Carpet): Adapted domain specification for map 0:
  convergence factor: 2
  convergence level : 0
Output¶
File name to output grid coordinates.
>>> Carpet::grid_coordinates_filename = "carpet-grid.asc"
Warning¶
- INFO (Carpet): There are not enough time levels for the desired temporal prolongation order in the grid function group “ADMBASE::METRIC”. With Carpet::prolongation_order_time=2, you need at least 3 time levels.
CarpetLib¶
This thorn contains the backend library that provides mesh refinement. CarpetLib consists of three major parts:
1. A set of generic useful helpers. The class vect<T,D> provides small D-dimensional vectors of the type T; a vect corresponds to a grid point location. The class bbox<T,D> provides D-dimensional bounding boxes using type T as indices; a bbox defines the location and shape of a grid patch. Finally, bboxset<T,D> is a collection of bboxes.
2. The grid hierarchy and data handling. The grid hierarchy is described by a set of classes. Except for the actual data, all structures and all information are replicated on all processors.
3. Interpolation operators.
The interpolators used for restriction and prolongation are different from those used for the generic interpolation interface in Cactus. The reason is that interpolation is expensive, and hence the interpolation operators used for restriction and prolongation have to be streamlined and optimised. As one knows the location of the sampling points for the interpolation, one can calculate the coefficients in advance, saving much time compared to calling a generic interpolation interface.
Parameter¶
Provide one extra ghost point during restriction for staggered operators.
>>> CarpetLib::support_staggered_operators = "yes"
CarpetRegrid2¶
Set up refined regions by specifying a set of centres and radii about them. All CarpetRegrid2 does is take a regridding specification from the user and translate it into a gh:
- gh is a grid hierarchy. It describes, for each refinement level, the location of the grid patches. The gh does not contain ghost zones or prolongation boundaries. There exists only one common gh for all grid functions.
- dh is a data hierarchy. It extends the notion of a gh by ghost zones and prolongation boundaries. The dh does most of the bookkeeping work, deciding which grid patches interact with what other grid patches through synchronisation, prolongation, restriction, and boundary prolongation. As all grid functions have the same number of ghost zones, there exists also only one dh for all grid functions.
Note
To regrid means to select a new set of grid patches for each refinement level. To recompose the grid hierarchy means to move data around. Regridding is only about bookkeeping, while recomposing is about data munging.
Parameter¶
Set up refined regions by specifying a set of centres and radii about them.
>>> Carpet::max_refinement_levels = 7
>>> CarpetRegrid2::num_centres = 2
>>> CarpetRegrid2::num_levels_1 = 7
>>> CarpetRegrid2::num_levels_2 = 7
>>> CarpetRegrid2::position_x_1 = -15.2
>>> CarpetRegrid2::position_x_2 = 15.2
>>>
>>> CarpetRegrid2::radius_1[1] = 270.0
>>> CarpetRegrid2::radius_1[2] = 162.0
>>> CarpetRegrid2::radius_1[3] = 94.5
>>> CarpetRegrid2::radius_1[4] = 40.5
>>> CarpetRegrid2::radius_1[5] = 27.0
>>> CarpetRegrid2::radius_1[6] = 13.5
>>>
>>> CarpetRegrid2::radius_2[1] = 270.0
>>> CarpetRegrid2::radius_2[2] = 162.0
>>> CarpetRegrid2::radius_2[3] = 94.5
>>> CarpetRegrid2::radius_2[4] = 40.5
>>> CarpetRegrid2::radius_2[5] = 27.0
>>> CarpetRegrid2::radius_2[6] = 13.5
>>>
>>> Carpet::refinement_factor = 2
INFO (CarpetRegrid2): Centre 1 is at position [-15.2,0,0] with 7 levels
INFO (CarpetRegrid2): Centre 2 is at position [15.2,0,0] with 7 levels
INFO (Carpet): Grid structure (superregions, coordinates):
  [0][0][0] exterior: [-576,-576,-576] : [576,576,576] : [18,18,18]
  [1][0][0] exterior: [-369,-351,-351] : [369,351,351] : [9,9,9]
  [2][0][0] exterior: [-216,-202.5,-202.5] : [216,202.5,202.5] : [4.5,4.5,4.5]
  [3][0][0] exterior: [-130.5,-114.75,-114.75] : [130.5,114.75,114.75] : [2.25,2.25,2.25]
  [4][0][0] exterior: [-66.375,-50.625,-50.625] : [66.375,50.625,50.625] : [1.125,1.125,1.125]
  [5][0][0] exterior: [-47.25,-32.0625,-32.0625] : [47.25,32.0625,32.0625] : [0.5625,0.5625,0.5625]
  [6][0][0] exterior: [-31.21875,-16.03125,-16.03125] : [31.21875,16.03125,16.03125] : [0.28125,0.28125,0.28125]
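The grid spacings in the output above halve on each level, since CarpetRegrid2 inherits Carpet’s refinement factor. The arithmetic:

```python
h0 = 18.0                 # coarse grid spacing
refinement_factor = 2
max_refinement_levels = 7

spacings = [h0 / refinement_factor**level for level in range(max_refinement_levels)]
print(spacings)  # [18.0, 9.0, 4.5, 2.25, 1.125, 0.5625, 0.28125]
```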
Regrid every n time steps
>>> CarpetRegrid2::regrid_every = 32 # 2**(max_refinement_levels - 2)
Minimum movement to trigger a regridding
>>> CarpetRegrid2::movement_threshold_1 = 0.10  # Regrid only if the regions have changed sufficiently
INFO (CarpetRegrid2): Refined regions have not changed sufficiently; skipping regridding
Warning¶
PARAMETER ERROR (CarpetRegrid2): The number of requested refinement levels is larger than the maximum number of levels specified by Carpet::max_refinement_levels
>>> Carpet::max_refinement_levels = <number>
VolumeIntegrals_GRMHD¶
Thorn for integration of spacetime quantities. The center of mass (CoM) can be used to set the AMR center.
Note
In a binary neutron star simulation, the maximum density is \(\sim 1\), with atmosphere values surrounding the stars of at most \(\sim\) 1e-6. The concern is that a bad conservative-to-primitive inversion called before this diagnostic might cause Gamma to be extremely large. Capping Gamma at 1e4 should prevent the CoM integral from being thrown off significantly, which would otherwise throw off the NS tracking.
Parameter¶
Set verbosity level: 1=useful info; 2=moderately annoying (though useful for debugging)
>>> VolumeIntegrals_GRMHD::verbose = 1
How often to compute volume integrals?
>>> VolumeIntegrals_GRMHD::VolIntegral_out_every = 32 # 2**(max_refinement_levels - 2)
Number of volume integrals to perform
>>> VolumeIntegrals_GRMHD::NumIntegrals = 6
>>>
>>> # Integrate the entire volume with an integrand of 1 (used for testing/validation purposes only).
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[1] = "one"
>>>
>>> # Compute the center of mass in an integration volume originally centered at
>>> # (x,y,z) = (-15.2,0,0) with a coordinate radius of 13.5. Also use the center
>>> # of mass integral result to set the ZEROTH AMR centre.
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[2] = "centerofmass"
>>> VolumeIntegrals_GRMHD::volintegral_sphere__center_x_initial [2] = -15.2
>>> VolumeIntegrals_GRMHD::volintegral_inside_sphere__radius [2] = 13.5
>>> VolumeIntegrals_GRMHD::amr_centre__tracks__volintegral_inside_sphere[2] = 0
>>>
>>> # Same as above, except use integrand = 1 (for validation purposes, to ensure
>>> # the integration volume is approximately 4/3*pi*13.5^3).
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[3] = "one"
>>> VolumeIntegrals_GRMHD::volintegral_sphere__center_x_initial [3] = -15.2
>>> VolumeIntegrals_GRMHD::volintegral_inside_sphere__radius [3] = 13.5
>>>
>>> # The neutron star originally centered at (x,y,z) = (+15.2,0,0). Also use the
>>> # center of mass integral result to set the FIRST AMR centre.
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[4] = "centerofmass"
>>> VolumeIntegrals_GRMHD::volintegral_sphere__center_x_initial [4] = 15.2
>>> VolumeIntegrals_GRMHD::volintegral_inside_sphere__radius [4] = 13.5
>>> VolumeIntegrals_GRMHD::amr_centre__tracks__volintegral_inside_sphere[4] = 1
>>>
>>> # Same as above, except use integrand = 1.
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[5] = "one"
>>> VolumeIntegrals_GRMHD::volintegral_sphere__center_x_initial [5] = 15.2
>>> VolumeIntegrals_GRMHD::volintegral_inside_sphere__radius [5] = 13.5
>>>
>>> # Perform rest-mass integrals over the entire volume.
>>> VolumeIntegrals_GRMHD::Integration_quantity_keyword[6] = "restmass"
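For the validation integrals with integrand 1 (entries 3 and 5), the expected result is roughly the coordinate volume of the integration sphere:

```python
import math

radius = 13.5  # coordinate radius of the integration sphere
expected_volume = 4.0 / 3.0 * math.pi * radius**3
print(f"{expected_volume:.1f}")  # 10306.0
```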
Enable output file
>>> VolumeIntegrals_GRMHD::enable_file_output = 1 >>> VolumeIntegrals_GRMHD::outVolIntegral_dir = "volume_integration"
Gamma speed limit
>>> VolumeIntegrals_GRMHD::CoM_integrand_GAMMA_SPEED_LIMIT = 1e4
CarpetTracker¶
Object coordinates are updated by CarpetTracker, which provides a simple interface to the object trackers PunctureTracker and NSTracker in order to have the refined region follow the moving objects.
Parameter¶
Spherical surface index or name which is the source for the location of the refined regions. (The maximum number of tracked surfaces is less than 10.)
>>> CarpetTracker::surface[0] = 0                    # <surface index>
>>> CarpetTracker::surface_name[0] = "Righthand NS"  # <surface name>
NSTracker¶
This thorn can track the location of a neutron star, e.g. to guide mesh refinement.
Parameter¶
Index or name of the spherical surface which should be moved around
>>> NSTracker::NSTracker_SF_Name = "Righthand NS"          # <surface name>
>>> NSTracker::NSTracker_SF_Name_Opposite = "Lefthand NS"
>>> NSTracker::NSTracker_SF_Index = 0                      # <surface index>
>>> NSTracker::NSTracker_SF_Index_Opposite = 1
Allowed to move if new location is not too far from old.
>>> NSTracker::NSTracker_max_distance = 3
grid scalar group containing coordinates of center of star
>>> NSTracker::NSTracker_tracked_location = "Hydro_Analysis::Hydro_Analysis_rho_max_loc"  # location of maximum
position_x[NSTracker_SF_Index] = Hydro_Analysis_rho_max_loc
position_x[NSTracker_SF_Index_Opposite] = - Hydro_Analysis_rho_max_loc
>>> NSTracker::NSTracker_tracked_location = "Hydro_Analysis::Hydro_Analysis_rho_core_center_volume_weighted"  # center of mass
Time after which to stop updating the SF
>>> NSTracker::NSTracker_stop_time = -1
Hydro_Analysis¶
This thorn provides basic hydro analysis routines.
Parameter¶
Look for the value and location of the maximum of rho
>>> Hydro_Analysis::Hydro_Analysis_comp_rho_max = "true"
>>> Hydro_Analysis::Hydro_Analysis_average_multiple_maxima_locations = "yes"  # when finding more than one global maximum location, average the positions
>>> Hydro_Analysis::Hydro_Analysis_comp_rho_max_every = 32                    # 2**(max_refinement_levels - 2)
Look for the location of the volume-weighted center of mass
>>> Hydro_Analysis::Hydro_Analysis_comp_vol_weighted_center_of_mass = "yes"
Look for the proper distance between the maximum of the density and the origin (along a straight coordinate line, not a geodesic)
>>> Hydro_Analysis::Hydro_Analysis_comp_rho_max_origin_distance = "yes"
>>> ActiveThorns = "AEILocalInterp"
>>> Hydro_Analysis::Hydro_Analysis_interpolator_name = "Lagrange polynomial interpolation (tensor product)"
Output¶
Coordinate location of the maximum of rho
>>> IOScalar::outScalar_vars = "Hydro_Analysis_rho_max" >>> IOScalar::outScalar_vars = "Hydro_Analysis::Hydro_Analysis_rho_max_loc"
Coordinate location of the volume-weighted center of mass
>>> IOScalar::outScalar_vars = "Hydro_Analysis_rho_center_volume_weighted"
Proper distance between the maximum of the density and the origin (along a straight coordinate line)
>>> IOScalar::outScalar_vars = "Hydro_Analysis::Hydro_Analysis_rho_max_origin_distance"
Warning¶
Cannot get handle for interpolation ! Forgot to activate an implementation providing interpolation operators (e.g. LocalInterp)?
>>> ActiveThorns = "LocalInterp"
No driver thorn activated to provide an interpolation routine for grid arrays
>>> ActiveThorns = "CarpetInterp"
No handle found for interpolation operator ‘Lagrange polynomial interpolation (tensor product)’
>>> ActiveThorns = "AEILocalInterp"
No handle: ‘-2’ found for reduction operator ‘sum’
>>> ActiveThorns = "LocalReduce"
PunctureTracker¶
PunctureTracker tracks black hole positions evolved with moving-puncture techniques. The BH position is stored as the centroid of a spherical surface (even though there is no actual surface) provided by SphericalSurface.
Parameter¶
A spherical surface index where we can store the puncture location
>>> PunctureTracker::which_surface_to_store_info[0] = 0
>>> PunctureTracker::track [0] = yes
>>> PunctureTracker::initial_x [0] =
>>> PunctureTracker::which_surface_to_store_info[1] = 1
>>> PunctureTracker::track [1] = yes
>>> PunctureTracker::initial_x [1] =
Warning¶
No handle found for interpolation operator ‘Lagrange polynomial interpolation’
>>> ActiveThorns = "AEILocalInterp"
Error
>>> ActiveThorns = "SphericalSurface" >>> SphericalSurface::nsurfaces = 2 >>> SphericalSurface::maxntheta = 66 >>> SphericalSurface::maxnphi = 124 >>> SphericalSurface::verbose = yes
Output¶
Location of punctures
>>> IOASCII::out0D_vars = "PunctureTracker::pt_loc"
CarpetInterp/CarpetInterp2¶
This thorn provides a parallel interpolator for Carpet.
CarpetReduce¶
This thorn provides parallel reduction operators for Carpet. This thorn now uses a weight function. The weight is 1 for all “normal” grid points.
Note
The weight is set to 0 on symmetry boundaries and possibly on the outer boundary, and it might be set to \(1/2\) on the edge of the boundary. The weight is also reduced or set to 0 on coarser grids that are overlaid by finer grids.
CarpetMask¶
Remove unwanted regions from Carpet’s reduction operations. This can be used to excise the interior of horizons.
Parameter¶
Exclude spherical surfaces with shrink factor
Note
iweight = 0 if dr <= radius_{sf} * excluded_surface_factor
>>> CarpetMask::excluded_surface[0] = 0         # index of excluded surface
>>> CarpetMask::excluded_surface_factor[0] = 1
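A sketch of the resulting weight (masked_weight is a hypothetical helper illustrating the rule in the note above, not part of the CarpetMask API):

```python
def masked_weight(dr, sf_radius, excluded_surface_factor=1.0):
    """Reduction weight for a point at distance dr from the excluded
    surface's centre: 0 inside the (scaled) surface, 1 outside."""
    return 0.0 if dr <= sf_radius * excluded_surface_factor else 1.0

print(masked_weight(0.5, 1.0))  # 0.0 (inside the excised region)
print(masked_weight(2.0, 1.0))  # 1.0 (outside)
```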
Initial Data¶
InitBase¶
Thorn InitBase specifies how initial data are to be set up.
Parameter¶
Procedure for setting up initial data [“init_some_levels”, “init_single_level”, “init_two_levels”, “init_all_levels”]:
>>> InitBase::initial_data_setup_method = "init_all_levels"
ADMBase¶
This thorn provides the basic variables used to communicate between thorns doing General Relativity in the 3+1 formalism.
- alp
- betax betay betaz
- dtalp
- dtbetax dtbetay dtbetaz
- gxx gyy gzz gxy gxz gyz
- kxx kyy kzz kxy kxz kyz
Parameter¶
Initial data value
>>> ADMBase::initial_data = "Meudon_Bin_NS"
>>> ADMBase::initial_lapse = "Meudon_Bin_NS"
>>> ADMBase::initial_shift = "zero"
>>> ADMBase::initial_dtlapse = "Meudon_Bin_NS"
>>> ADMBase::initial_dtshift = "zero"
evolution method
>>> ADMBase::evolution_method = "ML_BSSN"
>>> ADMBase::lapse_evolution_method = "ML_BSSN"
>>> ADMBase::shift_evolution_method = "ML_BSSN"
>>> ADMBase::dtlapse_evolution_method = "ML_BSSN"
>>> ADMBase::dtshift_evolution_method = "ML_BSSN"
Number of time levels
>>> ADMBase::lapse_timelevels = 3
>>> ADMBase::shift_timelevels = 3
>>> ADMBase::metric_timelevels = 3
Output¶
ADM
>>> CarpetIOHDF5::out2D_vars = "ADMBase::curv >>> ADMBase::lapse >>> ADMBase::metric >>> ADMBase::shift"
Warning¶
The variable “ADMBASE::alp” has only 1 active time levels, which is not enough for boundary prolongation of order 1
>>> ADMBase::lapse_timelevels = 3
>>> ADMBase::shift_timelevels = 3
>>> ADMBase::metric_timelevels = 3
StaticConformal¶
StaticConformal provides aliased functions to convert between physical and conformal 3-metric values.
- psi: Conformal factor
Parameter¶
Metric is conformal with static conformal factor, extrinsic curvature is physical
>>> ADMBase::metric_type = "static conformal"
HydroBase¶
HydroBase defines the primitive variables
- rho: rest mass density \(\rho\)
- press: pressure \(P\)
- eps: internal energy density \(\epsilon\)
- vel[3]: contravariant fluid three velocity \(v^{i}\)
- w_lorentz: Lorentz Factor \(W\)
- Y_e: electron fraction \(Y_e\)
- Abar: Average atomic mass
- temperature: temperature \(T\)
- entropy: specific entropy per particle \(s\)
- Bvec[3]: contravariant magnetic field vector \(B^{i}\)
- Avec[3]: Vector potential
- Aphi: Electric potential for the Lorenz gauge
Parameter¶
Number of time levels in evolution scheme
>>> InitBase::initial_data_setup_method = "init_all_levels"
>>> HydroBase::timelevels = 3
rho[i] = 0.0; rho_p[i] = 0.0; rho_p_p[i] = 0.0;
The hydro initial value and evolution method (rho, vel, w_lorentz, eps)
>>> HydroBase::initial_hydro = "zero"
>>> HydroBase::evolution_method = "none"
Other initial value and Evolution method
>>> HydroBase::initial_Avec = "none"
>>> HydroBase::initial_Aphi = "none"
>>> HydroBase::initial_Bvec = "none"
>>> HydroBase::Bvec_evolution_method = "none"
>>> HydroBase::initial_temperature = "none"
>>> HydroBase::temperature_evolution_method = "none"
>>> HydroBase::initial_entropy = "none"
>>> HydroBase::entropy_evolution_method = "none"
>>> HydroBase::initial_Abar = "none"
>>> HydroBase::Abar_evolution_method = "none"
>>> HydroBase::initial_Y_e = "none"
>>> HydroBase::Y_e_evolution_method = "none"
TmunuBase¶
Provide grid functions for the stress-energy tensor
Parameter¶
Should the stress-energy tensor have storage?
>>> TmunuBase::stress_energy_storage = yes
Should the stress-energy tensor be calculated for the RHS evaluation?
>>> TmunuBase::stress_energy_at_RHS = yes
Number of time levels
>>> TmunuBase::timelevels = 3
Meudon_Bin_NS¶
Import LORENE Bin_NS binary neutron star initial data.
Parameter¶
Input file name containing LORENE data
>>> Meudon_Bin_NS::filename = "resu.d"
Initial data EOS identifier
>>> Meudon_Bin_NS::filename = >>> Meudon_Bin_NS::eos_table =
Seed_Magnetic_Fields¶
The LORENE code cannot yet compute magnetized BNS models, so this thorn seeds a magnetic field on top of unmagnetized initial data.
The following sets up a vector potential of the form
where \(\varpi = \sqrt{x^2+y^2}\) is the cylindrical radius, and \(X \in \{\rho, P\}\) specifies whether the vector potential is proportional to \(P\) or \(\rho\) in the region above the cutoff.
This formulation assumes \(A_r = A_\theta = 0\); only \(A_\phi\) can be nonzero. The coordinate transformations are thus as follows:
Parameter¶
A-field prescription [“Pressure_prescription”, “Density_prescription”]:
>>> Seed_Magnetic_Fields::Afield_type = "Pressure_prescription"
Multiply \(A_\phi\) by \(\varpi^2\)?
>>> Seed_Magnetic_Fields::enable_varpi_squared_multiplication = "yes"
Magnetic field strength parameter.
>>> Seed_Magnetic_Fields::A_b = 1e-3
Cutoff pressure, below which vector potential is set to zero. Typically set to 4% of the maximum initial pressure.
>>> Seed_Magnetic_Fields::P_cut = 1e-5
Magnetic field strength pressure exponent \(A_\phi = \varpi^2 A_b max[(P - P_{cut})^{n_s}, 0]\).
>>> Seed_Magnetic_Fields::n_s = 1
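The pressure prescription above can be sketched as a standalone helper (hypothetical code for illustration only, not part of the thorn; the defaults mirror the A_b, P_cut, and n_s values above):

```python
import numpy as np

# Sketch of A_phi = varpi^2 * A_b * max[(P - P_cut), 0]^n_s on Cartesian points.
# Hypothetical helper for illustration -- not Seed_Magnetic_Fields source code.
def seed_A_phi(x, y, P, A_b=1e-3, P_cut=1e-5, n_s=1):
    varpi_sq = x**2 + y**2                        # cylindrical radius squared
    interior = np.maximum(P - P_cut, 0.0)**n_s    # vanishes below the cutoff
    return varpi_sq * A_b * interior
```

Below the cutoff pressure the vector potential is identically zero, so the seeded field is confined to the stellar interior.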
Cutoff density, below which vector potential is set to zero. Typically set to 20% of the maximum initial density.
>>> Seed_Magnetic_Fields::rho_cut = 0.2 # If max density = 1.0
Define A fields on an IllinoisGRMHD staggered grid?
>>> Seed_Magnetic_Fields::enable_IllinoisGRMHD_staggered_A_fields = "yes"
Seed_Magnetic_Fields_BNS¶
Thorn Seed_Magnetic_Fields seeds magnetic fields within a single star. This thorn simply extends the capability to two stars, centered at \((x_{1},0,0)\) and \((x_{2},0,0)\) (LORENE sets up the neutron stars along the x-axis by default).
Parameter¶
Magnetic field strength parameter.
>>> Seed_Magnetic_Fields_BNS::A_b = 1e-3
Cutoff pressure, below which vector potential is set to zero. Typically set to 4% of the maximum initial pressure.
>>> Seed_Magnetic_Fields_BNS::P_cut = 1e-5
Magnetic field strength pressure exponent.
>>> Seed_Magnetic_Fields_BNS::n_s = 1
Define A fields on an IllinoisGRMHD staggered grid?
>>> Seed_Magnetic_Fields_BNS::enable_IllinoisGRMHD_staggered_A_fields = "no"
Which field structure to use [“poloidal_A_interior”, “dipolar_A_everywhere”]:
>>> Seed_Magnetic_Fields_BNS::enable_IllinoisGRMHD_staggered_A_fields = "yes" # This requires a staggered grid >>> Seed_Magnetic_Fields_BNS::A_field_type = "poloidal_A_interior" # interior to the star >>> Seed_Magnetic_Fields_BNS::x_c1 = -15.2 # x coordinate of NS1 center >>> Seed_Magnetic_Fields_BNS::x_c2 = 15.2 # x coordinate of NS2 center >>> Seed_Magnetic_Fields_BNS::r_NS1 = 13.5 # Radius of NS1. Does not have to be perfect, but must not overlap other star. >>> Seed_Magnetic_Fields_BNS::r_NS2 = 13.5 # Radius of NS2
\[A_\phi = \varpi^2 A_b \max[(P-P_{cut})^{n_s}, 0]\]
>>> Seed_Magnetic_Fields_BNS::enable_IllinoisGRMHD_staggered_A_fields = "yes" # This requires a staggered grid >>> Seed_Magnetic_Fields_BNS::A_field_type = "dipolar_A_everywhere" >>> Seed_Magnetic_Fields_BNS::x_c1 = -15.2 # x coordinate of NS1 center >>> Seed_Magnetic_Fields_BNS::x_c2 = 15.2 # x coordinate of NS2 center >>> Seed_Magnetic_Fields_BNS::r_NS1 = 13.5 # Radius of NS1. Does not have to be perfect, but must not overlap other star. >>> Seed_Magnetic_Fields_BNS::r_NS2 = 13.5 # Radius of NS2 >>> Seed_Magnetic_Fields_BNS::r_zero_NS1 = 1.0 # Current loop radius >>> Seed_Magnetic_Fields_BNS::r_zero_NS2 = 1.0 >>> Seed_Magnetic_Fields_BNS::I_zero_NS1 = 0.0 # Magnetic field loop current of NS1 >>> Seed_Magnetic_Fields_BNS::I_zero_NS2 = 0.0
\[A_{\phi}=\frac{\pi r_{0}^{2} I_{0}}{\left(r_{0}^{2}+r^{2}\right)^{3 / 2}}\left(1+\frac{15 r_{0}^{2}\left(r_{0}^{2}+\varpi^{2}\right)}{8\left(r_{0}^{2}+r^{2}\right)^{2}}\right)\]
TwoPunctures¶
Create initial data for two puncture black holes using a single-domain spectral method.
Following York’s conformal-transverse-traceless decomposition method, we make the following assumptions for the metric and the extrinsic curvature
The initial data described by this method are conformally flat and maximally sliced, \(K = 0\). With this ansatz the Hamiltonian constraint yields an equation for the conformal factor \(\psi\)
while the momentum constraint yields an equation for the vector potential \(\boldsymbol{V}\)
Note
The TwoPunctures thorn is restricted to problems involving two punctures.
One can proceed by choosing a non-trivial analytic solution of the Bowen-York type for the momentum constraint,
The Hamiltonian constraint is then solved by writing the conformal factor \(\psi\) as a sum of a singular term and a finite correction \(u\)
It is impossible to unambiguously define local black hole masses in general. In the following we choose the ADM mass
Here \(u_{+}\) and \(u_{-}\) are the values of \(u\) at the respective punctures, and \(m_{+}\), \(m_{-}\) are the bare masses.
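For reference, in the TwoPunctures approach the ADM mass measured at each puncture takes the form (with \(d\) denoting the coordinate distance between the punctures):
\[M_{\pm}=m_{\pm}\left(1+u_{\pm}\right)+\frac{m_{+} m_{-}}{2 d}\]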
Parameter¶
initial_data
>>> ADMBase::initial_data = "twopunctures" >>> ADMBase::initial_lapse = "twopunctures-averaged" >>> ADMBase::initial_shift = "zero" >>> ADMBase::initial_dtlapse = "zero" >>> ADMBase::initial_dtshift = "zero"
Coordinate of the puncture
>>> TwoPunctures::par_b = 5.0 >>> TwoPunctures::center_offset[0] = -0.538461538462
ADM mass of Black holes
>>> TwoPunctures::target_M_plus = 0.553846153846 >>> TwoPunctures::target_M_minus = 0.446153846154 >>> TwoPunctures::adm_tol = 1.0e-10 INFO (TwoPunctures): Attempting to find bare masses. INFO (TwoPunctures): Target ADM masses: M_p=0.553846 and M_m=0.446154 INFO (TwoPunctures): ADM mass tolerance: 1e-10 INFO (TwoPunctures): Bare masses: mp=1, mm=1 INFO (TwoPunctures): ADM mass error: M_p_err=0.500426421474965, M_m_err=0.607857653302066 . . . INFO (TwoPunctures): Bare masses: mp=0.518419372531011, mm=0.391923877275946 INFO (TwoPunctures): ADM mass error: M_p_err=2.35933494963092e-12, M_m_err=8.5276896655273e-11 INFO (TwoPunctures): Found bare masses. >>> TwoPunctures::target_M_plus = 0.553846153846 >>> TwoPunctures::target_M_minus = 0.446153846154 >>> TwoPunctures::par_m_plus = 0.553846153846 >>> TwoPunctures::par_m_minus = 0.446153846154 >>> TwoPunctures::adm_tol = 1.0e-10 INFO (TwoPunctures): Attempting to find bare masses. INFO (TwoPunctures): Target ADM masses: M_p=0.553846 and M_m=0.446154 INFO (TwoPunctures): ADM mass tolerance: 1e-10 INFO (TwoPunctures): Bare masses: mp=0.553846153846, mm=0.446153846154 INFO (TwoPunctures): ADM mass error: M_p_err=0.0334459078036595, M_m_err=0.0445419016377125 . . . INFO (TwoPunctures): Bare masses: mp=0.518419372531011, mm=0.391923877275946 INFO (TwoPunctures): ADM mass error: M_p_err=2.35933494963092e-12, M_m_err=8.5276896655273e-11 INFO (TwoPunctures): Found bare masses.
momentum of the puncture
>>> TwoPunctures::par_P_plus [1] = +0.3331917498 >>> TwoPunctures::par_P_minus[1] = -0.3331917498
spin of the puncture
>>> TwoPunctures::par_S_plus [1] = 0.0 >>> TwoPunctures::par_S_minus[1] = 0.0
A small number to smooth out singularities at the puncture locations
>>> TwoPunctures::TP_epsilon = 1e-6
Tiny number to avoid NaNs near or at the puncture locations
>>> TwoPunctures::TP_Tiny = 1.0e-2
Print screen output while solving
>>> TwoPunctures::verbose = yes INFO (TwoPunctures): Bare masses: mp=0.553846153846, mm=0.446153846154 Newton: it=0 |F|=7.738745e-02 bare mass: mp=0.553846 mm=0.446154 bicgstab: itmax 100, tol 7.738745e-05 bicgstab: 0 6.428e-01 bicgstab: 1 1.010e+00 1.021e+00 0.000e+00 6.116e-01 bicgstab: 2 7.551e-02 1.622e+00 1.531e-02 4.085e-01 bicgstab: 3 1.561e-02 2.836e-01 2.396e-02 8.846e-01 bicgstab: 4 7.358e-03 2.473e-01 -1.079e-01 9.778e-01 bicgstab: 5 3.429e-04 9.104e+00 -7.954e-01 4.003e-01 bicgstab: 6 6.564e-05 3.724e-01 -4.164e-01 1.293e+00 Newton: it=1 |F|=1.149396e-03 INFO (TwoPunctures): ADM mass error: M_p_err=0.0334459078036595, M_m_err=0.0445419016377125 >>> TwoPunctures::verbose = no INFO (TwoPunctures): Bare masses: mp=0.553846153846, mm=0.446153846154 INFO (TwoPunctures): ADM mass error: M_p_err=0.0334459078036595, M_m_err=0.0445419016377125
TOVSolver¶
This thorn provides initial data for TOV star(s) in isotropic coordinates. The Tolman-Oppenheimer-Volkoff solution is a static perfect fluid “star”.
Parameter¶
TOV star initial data
>>> ADMBase::initial_data = "tov" >>> ADMBase::initial_lapse = "tov" >>> ADMBase::initial_shift = "tov" >>> ADMBase::initial_dtlapse = "zero" >>> ADMBase::initial_dtshift = "zero"
Set up a TOV star described by a polytropic equation of state \(p=K \rho^{\Gamma}\)
>>> TOVSolver::TOV_Rho_Central[0] = 1.28e-3 >>> TOVSolver::TOV_Gamma = 2.0 >>> TOVSolver::TOV_K = 100.0
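As a quick sanity check of these values, the central pressure implied by \(p = K \rho^{\Gamma}\) can be computed directly (plain arithmetic, not TOVSolver code):

```python
# Central pressure for the polytrope p = K * rho^Gamma with the values above.
K, Gamma, rho_c = 100.0, 2.0, 1.28e-3
p_c = K * rho_c**Gamma   # approximately 1.6384e-4 in geometrized units
```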
Velocity of neutron star
>>> TOVSolver::TOV_Velocity_x[0] = 0.1 >>> TOVSolver::TOV_Velocity_y[0] = 0.2 >>> TOVSolver::TOV_Velocity_z[0] = 0.3
Two or more TOV stars
>>> Tovsolver::TOV_Num_TOVs = 2 >>> Tovsolver::TOV_Num_Radial = 200000 >>> Tovsolver::TOV_Combine_Method = "average" >>> Tovsolver::TOV_Rho_Central[0] = 0.16e-3 >>> Tovsolver::TOV_Position_x[0] = -15.0 >>> Tovsolver::TOV_Rho_Central[1] = 0.32e-3 >>> Tovsolver::TOV_Position_x[1] = 15.0
Exact¶
All of these exact spacetimes have been found useful for testing different aspects of the code.
This thorn sets up the 3+1 ADM variables for any of a number of exact spacetimes/coordinates. Optionally, any 4-metric can be Lorentz-boosted in any direction. As another option, the ADM variables can be calculated on an arbitrary slice through the spacetime, using arbitrary coordinates on the slice.
Parameter¶
The exact solution/coordinates
By default, this thorn sets up the ADM variables on the initial slice only. However, by setting ADMBase::evolution_method to "exact" you get an exact spacetime at all times, not just on a single slice.
>>> ADMBase::evolution_method = "exact"
Boundary¶
Boundary¶
Provides a generic interface to boundary conditions, and provides a set of standard boundary conditions for one, two, and three dimensional grid variables.
- scalar boundary conditions
- flat boundary conditions
- radiation boundary conditions
- copy boundary conditions
- Robin boundary conditions
- static boundary conditions
Parameter¶
Register routine
>>> Boundary::register_scalar = "yes" >>> Boundary::register_flat = "yes" >>> Boundary::register_radiation = "yes" >>> Boundary::register_copy = "yes" >>> Boundary::register_robin = "yes" >>> Boundary::register_static = "yes" >>> Boundary::register_none = "yes"
Warning¶
The aliased function ‘SymmetryTableHandleForGrid’ (required by thorn ‘Boundary’) has not been provided by any active thorn !
>>> ActiveThorns = "SymBase"
Evolution¶
Time¶
Calculates the timestep used for an evolution by either
- setting the timestep directly from a parameter value
- using a Courant-type condition to set the timestep based on the grid-spacing used.
Parameter¶
The standard timestep condition dt = dtfac*max(delta_space)
>>> grid::dxyz = 0.3 >>> time::dtfac = 0.1 ---------------------------------- it | | WAVETOY::phi | | t | norm2 | ---------------------------------- 0 | 0.000 | 0.10894195 | 1 | 0.030 | 0.10892065 | 2 | 0.060 | 0.10885663 | 3 | 0.090 | 0.10874996 |
Absolute value for timestep
>>> time::timestep_method = "given" >>> time::timestep = 0.1 ---------------------------------- it | | WAVETOY::phi | | t | norm2 | ---------------------------------- 0 | 0.000 | 0.10894195 | 1 | 0.100 | 0.10870525 | 2 | 0.200 | 0.10799700 | 3 | 0.300 | 0.10682694 | >>> time::timestep_method = "given" >>> time::timestep = 0.2 ---------------------------------- it | | WAVETOY::phi | | t | norm2 | ---------------------------------- 0 | 0.000 | 0.10894195 | 1 | 0.200 | 0.10799478 | 2 | 0.400 | 0.10520355 | 3 | 0.600 | 0.10072358 |
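The two ways of choosing the timestep can be sketched as follows (hypothetical helper, not Time thorn code):

```python
# dt from the Courant-type rule dt = dtfac*max(delta_space); the alternative
# "given" method instead uses time::timestep verbatim.
def courant_dt(dtfac, dx, dy, dz):
    return dtfac * max(dx, dy, dz)

# With grid::dxyz = 0.3 and time::dtfac = 0.1 this reproduces the t column
# in the first output above, which advances in steps of 0.03.
```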
MoL¶
The Method of Lines (MoL) converts a (system of) partial differential equation(s) into an ordinary differential equation containing some spatial differential operator. As an example, consider writing the hyperbolic system of PDEs
in the alternative form
Given this separation of the time and space discretizations, well known stable ODE integrators such as Runge-Kutta can be used to do the time integration.
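The idea can be sketched with a generic RK4 step for the reduced system \(du/dt = f(t, u)\) (generic ODE code for illustration, not the MoL implementation):

```python
# One classical RK4 step for du/dt = f(t, u).
def rk4_step(f, t, u, dt):
    k1 = f(t, u)
    k2 = f(t + 0.5*dt, u + 0.5*dt*k1)
    k3 = f(t + 0.5*dt, u + 0.5*dt*k2)
    k4 = f(t + dt, u + dt*k3)
    return u + dt/6.0 * (k1 + 2.0*k2 + 2.0*k3 + k4)

# One step of du/dt = -u from u = 1 with dt = 0.1 lands near exp(-0.1).
u1 = rk4_step(lambda t, u: -u, 0.0, 1.0, 0.1)
```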
Parameter¶
Chooses between the different methods.
>>> MoL::ODE_Method = "RK4" INFO (MoL): Using Runge-Kutta 4 as the time integrator.
Controls the number of intermediate steps for the ODE solver. For the generic Runge-Kutta solvers it controls the order of accuracy of the method.
>>> MoL::MoL_Intermediate_Steps = 4
Controls the amount of scratch space used.
>>> MoL::MoL_Num_Scratch_Levels = 1
Warning¶
When using the efficient RK4 evolver the number of intermediate steps must be 4, and the number of scratch levels at least 1.
>>> MoL::MoL_Intermediate_Steps = 4 >>> MoL::MoL_Num_Scratch_Levels = 1
Dissipation¶
Add fourth order Kreiss-Oliger dissipation to the right hand side of evolution equations.
The additional dissipation terms appear as follows
where \(h_{x}\), \(h_{y}\), and \(h_{z}\) are the local grid spacings in each Cartesian direction. \(\epsilon\) is a positive, adjustable parameter controlling the amount of dissipation added; it must be less than 1 for stability.
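In one dimension, the order = 5 operator used below can be sketched as follows (illustrative stencil code, not the thorn's implementation; the stencil is the sixth difference \((D_{+}D_{-})^{3}\)):

```python
import numpy as np

# 1-D sketch of order-5 Kreiss-Oliger dissipation:
# rhs += (eps/64) * h^5 * (D+ D-)^3 u, whose interior stencil is
# [1, -6, 15, -20, 15, -6, 1] / h^6.  Illustrative code only.
def ko_dissipation_5(u, h, eps):
    stencil = np.array([1.0, -6.0, 15.0, -20.0, 15.0, -6.0, 1.0])
    d = np.zeros_like(u)
    for i in range(3, len(u) - 3):          # needs 3 ghost points per side
        d[i] = eps / (64.0 * h) * np.dot(stencil, u[i - 3:i + 4])
    return d

# A grid-frequency (Nyquist) mode u_i = (-1)^i is damped at the rate eps/h,
# i.e. d = -(eps/h) * u, while smooth modes are barely touched.
```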
Parameter¶
Dissipation order and strength
>>> Dissipation::order = 5 >>> Dissipation::epsdis = 0.1
Note
Currently available values of order are \(p \in\{1,3,5,7,9\}\). Applying dissipation at order \(p\) requires at least \((p + 1) / 2\) ghost zones.
List of evolved grid functions that should have dissipation added
>>> Dissipation::vars = "ML_BSSN::ML_log_confac ML_BSSN::ML_metric ML_BSSN::ML_trace_curv ML_BSSN::ML_curv ML_BSSN::ML_Gamma ML_BSSN::ML_lapse ML_BSSN::ML_shift ML_BSSN::ML_dtlapse ML_BSSN::ML_dtshift"
ML_BSSN¶
The code is designed to handle arbitrary shift and lapse conditions. Gauges are the commonly used \(1 + log\) and \(\tilde{\Gamma}\)-driver conditions with advection terms.
The hyperbolic K-driver slicing conditions have the form
The hyperbolic Gamma-driver conditions have the form
where \(F\) and \(\eta\) are, in general, positive functions of space and time. We typically choose \(F = 3/4\). In addition, a simple space-varying prescription for \(\eta\) is implemented
This is a generalization of many well known slicing and shift conditions.
Parameter¶
Evolution method
>>> ADMBase::evolution_method = "ML_BSSN" >>> ADMBase::lapse_evolution_method = "ML_BSSN" >>> ADMBase::shift_evolution_method = "ML_BSSN" >>> ADMBase::dtlapse_evolution_method = "ML_BSSN" >>> ADMBase::dtshift_evolution_method = "ML_BSSN"
K-driver slicing conditions: \(\frac{d \alpha}{dt} = - f \alpha^{n} K\)
>>> ML_BSSN::harmonicN = 1 >>> ML_BSSN::harmonicF = 2.0 [1+log slicing condition]
Gamma-driver condition: \(F\)
>>> ML_BSSN::useSpatialShiftGammaCoeff = 0 >>> ML_BSSN::ShiftGammaCoeff = <F>
\[F(r) = F\]
>>> ML_BSSN::useSpatialShiftGammaCoeff = 1 >>> ML_BSSN::ShiftGammaCoeff = <F> >>> ML_BSSN::spatialShiftGammaCoeffRadius = 50
\[F(r) = \min\left[1, e^{1 - \frac{r}{R}}\right] \times F\]
Gamma-driver condition: \(\eta\)
>>> ML_BSSN::useSpatialBetaDriver = 0 >>> ML_BSSN::BetaDriver = <eta>
\[\eta(r) = \eta\]
>>> ML_BSSN::useSpatialBetaDriver = 1 >>> ML_BSSN::BetaDriver = <eta> >>> ML_BSSN::spatialBetaDriverRadius = <R>
\[\eta(r) = \frac{R}{\max[r, R]} \times \eta\]
Enable spatially varying betaDriver
>>> ML_BSSN::useSpatialBetaDriver = 1
Advect Lapse and shift?
>>> ML_BSSN::advectLapse = 1 >>> ML_BSSN::advectShift = 1
Boundary condition for BSSN RHS and some of the ADMBase variables.
>>> ML_BSSN::rhs_boundary_condition = "scalar"
Whether to apply dissipation to the RHSs
>>> ML_BSSN::epsDiss = 0.0 >>> >>> Dissipation::epsdis = 0.1 >>> Dissipation::order = 5 >>> Dissipation::vars = "ML_BSSN::ML_log_confac >>> ML_BSSN::ML_metric >>> ML_BSSN::ML_trace_curv >>> ML_BSSN::ML_curv >>> ML_BSSN::ML_Gamma >>> ML_BSSN::ML_lapse >>> ML_BSSN::ML_shift >>> ML_BSSN::ML_dtlapse >>> ML_BSSN::ML_dtshift"
Enforced minimum of the lapse function
>>> ML_BSSN::MinimumLapse = 1.0e-8
Finite differencing order
>>> ML_BSSN::fdOrder = 4
Warning¶
Insufficient ghost or boundary points for ML_BSSN_InitialADMBase2Interior
>>> ML_BSSN::fdOrder = 8 >>> driver::ghost_size = 5 >>> CoordBase::boundary_size_x_lower = 5 >>> CoordBase::boundary_size_y_lower = 5 >>> CoordBase::boundary_size_z_lower = 5 >>> CoordBase::boundary_size_x_upper = 5 >>> CoordBase::boundary_size_y_upper = 5 >>> CoordBase::boundary_size_z_upper = 5
Range error setting parameter ‘ML_BSSN::initial_boundary_condition’ to ‘extrapolate-gammas’
>>> ActiveThorns = "ML_BSSN_Helper CoordGauge"
ML_BSSN_Helper¶
Warning¶
The function ExtrapolateGammas has not been provided by any active thorn.
>>> ActiveThorns = "NewRad"
IllinoisGRMHD¶
IllinoisGRMHD solves the equations of General Relativistic MagnetoHydroDynamics (GRMHD) using a high-resolution shock capturing scheme and the Piecewise Parabolic Method (PPM) for reconstruction.
IllinoisGRMHD evolves the vector potential \(A_{\mu}\) (on staggered grids) instead of the magnetic fields (\(B^i\)) directly, to guarantee that the magnetic fields will remain divergenceless even at AMR boundaries.
IllinoisGRMHD currently implements a hybrid EOS of the form
where \(P_{\text {cold}}\) and \(\epsilon_{\text {cold}}\) denote the cold components of \(P\) and \(\epsilon\) respectively, and \(\Gamma_{\text {th}}\) is a constant parameter which determines the conversion efficiency of kinetic to thermal energy at shocks. The function \(\epsilon_{\text {cold }}(\rho_{0})\) is related to \(P_{\text {cold }}\left(\rho_{0}\right)\) by the first law of thermodynamics,
The \(\Gamma\)-law EOS \(P=(\Gamma-1) \rho_{0} \epsilon\) is adopted. This corresponds to setting \(P_{\text {cold }}=(\Gamma-1) \rho_{0} \epsilon_{\text {cold }}\), which is equivalent to \(P_{\text {cold }}=\kappa \rho_{0}^{\Gamma}\) (with constant \(\kappa\)), and \(\Gamma_{\mathrm{th}}=\Gamma\). In the absence of shocks and in the initial data \(\epsilon=\epsilon_{\mathrm{cold}}\) and \(P=P_{\mathrm{cold}}\)
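This \(\Gamma\)-law special case can be sketched as a small helper (hypothetical code; names and defaults are illustrative, not IllinoisGRMHD's API):

```python
# Hybrid EOS sketch: P = P_cold + (Gamma_th - 1) * rho * (eps - eps_cold),
# with P_cold = kappa * rho^Gamma and, from the first law,
# eps_cold = P_cold / ((Gamma - 1) * rho).
def hybrid_pressure(rho, eps, kappa=100.0, Gamma=2.0, Gamma_th=2.0):
    P_cold = kappa * rho**Gamma
    eps_cold = P_cold / ((Gamma - 1.0) * rho)
    return P_cold + (Gamma_th - 1.0) * rho * (eps - eps_cold)

# In the absence of shocks eps = eps_cold, so P reduces to P_cold.
```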
Parameter¶
Determines how much evolution information is output
>>> IllinoisGRMHD::verbose = "no"
>>> IllinoisGRMHD::verbose = "essential"
>>> IllinoisGRMHD::verbose = "essential+iteration output"
Floor value on the energy variable tau and the baryonic rest mass density
>>> IllinoisGRMHD::tau_atm >>> IllinoisGRMHD::rho_b_atm
Hybrid EOS
>>> IllinoisGRMHD::gamma_th = 2.0 >>> IllinoisGRMHD::K_poly =
Chosen Matter and EM field boundary condition
>>> IllinoisGRMHD::EM_BC = "outflow" >>> IllinoisGRMHD::Matter_BC = "copy"
>>> IllinoisGRMHD::EM_BC = "frozen" >>> IllinoisGRMHD::Matter_BC = "frozen" #If Matter_BC or EM_BC is set to FROZEN, BOTH must be set to frozen!
Debug. If the primitives solver fails, this will output all data needed to debug where and why the solver failed.
>>> IllinoisGRMHD::conserv_to_prims_debug = 1
ID_converter_ILGRMHD¶
IllinoisGRMHD and HydroBase variables are incompatible. The former uses 3-velocity defined as \(v^i = u^i/u^0\), and the latter uses the Valencia formalism definition of \(v^i\) as measured by normal observers, defined as:
In addition, IllinoisGRMHD needs the A-fields to be defined on staggered grids, and HydroBase does not yet support this option.
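The velocity conversion can be sketched as follows, assuming the standard relation \(v^i_{(n)} = (u^i/u^0 + \beta^i)/\alpha\) between the two definitions (illustrative helper, not the converter's code):

```python
# Convert v^i = u^i/u^0 (IllinoisGRMHD) to the Valencia 3-velocity measured
# by normal observers (HydroBase), given lapse alpha and shift beta^i.
def valencia_velocity(v, beta, alpha):
    return [(vi + bi) / alpha for vi, bi in zip(v, beta)]
```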
Parameter¶
Single Gamma-law EOS:
>>> ID_converter_ILGRMHD::Gamma_Initial = 2.0 >>> ID_converter_ILGRMHD::K_Initial = 123.64110496340211
convert_to_HydroBase¶
Convert IllinoisGRMHD-compatible variables to HydroBase-compatible variables. Used for compatibility with HydroBase/ADMBase analysis thorns in the Einstein Toolkit.
Parameter¶
How often to convert IllinoisGRMHD primitive variables to HydroBase primitive variables. This is needed, e.g., for the particle tracer.
>>> convert_to_HydroBase::convert_to_HydroBase_every = 0
Analysis¶
ADMAnalysis¶
This thorn does basic analysis of the metric and extrinsic curvature tensors.
It calculates
- the trace of the extrinsic curvature (trK)
- the determinant of the metric (detg)
- the components of the metric in spherical coordinates (grr,grq,grp,gqq,gqp,gpp)
- the components of the extrinsic curvature in spherical coordinates (Krr,Krq,Krp,Kqq,Kqp,Kpp)
- components of the Ricci tensor
- the Ricci scalar
(q refers to the theta coordinate and p to the phi coordinate.)
Parameter¶
Project angular components onto \(r*dtheta\) and \(r*sin(theta)*dphi\)
>>> ADMAnalysis::normalize_dtheta_dphi = "yes"
Warning¶
- Cannot output variable because it has no storage
Output¶
The Ricci scalar
>>> IOHDF5::out2D_vars = "ADMAnalysis::ricci_scalar"
ADMMass¶
Thorn ADMMass can compute the ADM mass from quantities in ADMBase.
Note
ADMMass uses the ADMMacros for derivatives. Thus, convergence of results is limited to the order of these derivatives (ADMMacros::spatial_order).
Parameter¶
Number of measurements
>>> ADMMass::ADMMass_number = 1
Note
Multiple measurements can be done for both volume and surface integral, but the limit for both is 100 (change param.ccl if you need more).
ADMMass provides several possibilities to specify the (finite) integration domain for surface integral
>>> ADMMass::ADMMass_surface_distance[0] = 12
ADMMass provides several possibilities to specify the (finite) integration domain for volume integral
>>> ADMMass::ADMMass_use_all_volume_as_volume_radius = "yes" # Use the whole grid for volume integration
>>> ADMMass::ADMMass_use_surface_distance_as_volume_radius = "yes" # ADMMass_surface_distance specifies the integration radius
>>> ADMMass::ADMMass_volume_radius[0] = 12 # ADMMass_volume_radius specifies this radius
Should we exclude the region inside the AH from the volume integral?
>>> ADMMass::ADMMass_Excise_Horizons = "yes"
Warning¶
WARNING[L2,P0] (ADMMass): radius < 0 / not set, not calculating the volume integral to get the ADM mass.
>>> ADMMass::ADMMass_surface_distance[0] = 12
Output¶
The results for the volume integral, the usual surface integral, and the surface integral including the lapse
>>> IOASCII::out0D_vars = "ADMMass::ADMMass_Masses"
smallbPoynET¶
Using the HydroBase velocity and magnetic field variables, as well as the ADMBase spacetime metric variables, smallbPoynET computes \(b^i\) and the three spatial components of the Poynting flux. It also computes \(-1 - u_{0}\), which is useful for tracking unbound matter.
Parameter¶
How often to compute smallbPoyn?
>>> smallbPoynET::smallbPoynET_compute_every = 0
Output¶
Poynting flux
>>>
\(b^i\)
>>> CarpetIOHDF5::out2D_vars = "smallbPoynET::smallb2 smallbPoynET::smallbx smallbPoynET::smallby smallbPoynET::smallbz"
>>> CarpetIOHDF5::out2D_vars = "smallbPoynET::minus_one_minus_u_0"
particle_tracerET¶
This thorn provides essential functionality for self-consistent visualizations of the dynamics of magnetic field lines over time in a GRMHD simulation.
ML_ADMConstraints¶
ML_ADMConstraints calculates the ADM constraint violations directly, using potentially higher-order derivatives, and is in general preferred over ADMConstraints.
Output¶
Hamiltonian constraint
>>> IOHDF5::out2D_vars = "ML_ADMConstraints::ML_Ham"
Momentum constraints
>>> IOHDF5::out2D_vars = "ML_ADMConstraints::ML_mom"
QuasiLocalMeasures¶
Calculate quasi-local measures such as masses, momenta, or angular momenta and related quantities on closed two-dimensional surfaces, including on horizons.
Parameter¶
Specify a surface on which the thorn calculates useful quasi-local quantities
>>> QuasiLocalMeasures::num_surfaces = 1 >>> QuasiLocalMeasures::spatial_order = 4 >>> QuasiLocalMeasures::interpolator = "Lagrange polynomial interpolation" >>> QuasiLocalMeasures::interpolator_options = "order=4" >>> QuasiLocalMeasures::surface_name[0] = "waveextract surface at 100"
Output¶
Scalar quantities on the surface
>>> IOASCII::out0D_vars = "QuasiLocalMeasures::qlm_scalars"
AHFinderDirect¶
In 3+1 numerical relativity, it’s often useful to know the positions and shapes of any black holes in each slice.
This thorn finds apparent horizons in a numerical spacetime and calculates various quantities, such as the horizon area and its corresponding mass.
Note
The main complication here is that AHFinderDirect needs an initial guess for an AH shape, and if this initial guess is inaccurate AHFinderDirect may fail to find the AH.
Parameter¶
How often should we try to find apparent horizons?
>>> AHFinderDirect::find_every = 128 # iterations between horizon searches
Number of apparent horizons to search for
>>> AHFinderDirect::N_horizons = 2
Move the origins with the horizons
>>> AHFinderDirect::move_origins = yes
Which surface should we store the info?
>>> AHFinderDirect::origin_x [1] = >>> AHFinderDirect::initial_guess__coord_sphere__x_center[1] = >>> AHFinderDirect::initial_guess__coord_sphere__radius [1] = >>> AHFinderDirect::which_surface_to_store_info [1] = 2 >>> AHFinderDirect::track_origin_source_x [1] = "PunctureTracker::pt_loc_x[0]" >>> AHFinderDirect::track_origin_source_y [1] = "PunctureTracker::pt_loc_y[0]" >>> AHFinderDirect::track_origin_source_z [1] = "PunctureTracker::pt_loc_z[0]" >>> AHFinderDirect::max_allowable_horizon_radius [1] = 3
Multipole¶
The Multipole thorn can decompose multiple grid functions of any spin-weight on multiple spheres. A set of radii for these spheres, as well as the number of angular points to use, can be specified.
The angular dependence of a field \(u(t, r, \theta, \varphi)\) can be expanded in spin-weight s spherical harmonics
where the coefficients \(C^{l m}(t, r)\) are given by
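For the simplest case \(s = 0\), \(l = m = 0\) (where \(Y_{00} = 1/\sqrt{4\pi}\)), the coefficient integral can be sketched with a midpoint quadrature (illustrative code, not the thorn's integrator):

```python
import numpy as np

# C^{00} = \int conj(Y_00) u sin(theta) dtheta dphi, with Y_00 = 1/sqrt(4 pi).
def C00(u_func, ntheta=120, nphi=240):
    dth = np.pi / ntheta
    dph = 2.0 * np.pi / nphi
    th = (np.arange(ntheta) + 0.5) * dth   # midpoint sample points
    ph = (np.arange(nphi) + 0.5) * dph
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    integrand = u_func(TH, PH) * np.sin(TH) / np.sqrt(4.0 * np.pi)
    return integrand.sum() * dth * dph

# For the constant field u = 1 this gives C^{00} = sqrt(4 pi) ~ 3.5449.
```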
Parameter¶
Decide the number and radii of the coordinate spheres on which you want to decompose.
>>> Multipole::nradii = 3 >>> Multipole::radius[0] = 10 >>> Multipole::radius[1] = 20 >>> Multipole::radius[2] = 30 >>> Multipole::variables = "MyThorn::u"
How many points in the theta and phi direction?
>>> Multipole::ntheta = 120 >>> Multipole::nphi = 240
The maximum l mode to extract
>>> Multipole::l_max = 8
Output an HDF5 file for each variable containing one dataset per mode at each radius
>>> Multipole::output_hdf5 = yes
Numerical¶
Provides numerical infrastructure thorns.
SpaceMask¶
The mask is a grid function which can be used to assign a state to each point on the grid. It is used as a bit field, with different bits, or combinations of bits, assigned to represent various states.
For instance, a programmer may want to record whether each point of the grid has state “interior”, “excised”, or “boundary”. Two bits of the mask (enough to represent the three possible states of this type) are allocated to hold this information. If a new type is required, bits are allocated to it from the remaining free bits in the mask.
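The two-bit bookkeeping can be sketched as follows (bit layout and state names are illustrative, not SpaceMask's actual encoding or API):

```python
# Reserve the lowest two bits of the mask for a three-state "region" type.
STATE_BITS = 0b11
INTERIOR, EXCISED, BOUNDARY = 0b00, 0b01, 0b10

def set_state(mask, state):
    return (mask & ~STATE_BITS) | state   # clear the field, then set it

def get_state(mask):
    return mask & STATE_BITS

# All remaining bits of the mask stay free for additional types.
```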
Parameter¶
Turn on storage for mask
>>> SpaceMask::use_mask = "yes"
SphericalSurface¶
SphericalSurface defines two-dimensional surfaces with spherical topology. The thorn itself only acts as a repository for other thorns to set and retrieve such surfaces, making it a pure infrastructure thorn. One thorn can update a given spherical surface with information, and another thorn can read that information without having to know about the first thorn.
Parameter¶
Number of surfaces
>>> SphericalSurface::nsurfaces = 2
Surface definition: maximum number of grid points in the theta and phi directions
>>> SphericalSurface::maxntheta = 39 >>> SphericalSurface::maxnphi = 76
Surface definition: set the resolution according to the given parameters. Some spherical surface indices may be used by PunctureTracker.
>>> SphericalSurface::name [0] = "Righthand NS" >>> SphericalSurface::ntheta [0] = 39 # must be at least 3*nghoststheta >>> SphericalSurface::nphi [0] = 76 # must be at least 3*nghostsphi >>> SphericalSurface::nghoststheta[0] = 2 >>> SphericalSurface::nghostsphi [0] = 2
Place surface at a certain radius
>>> SphericalSurface::set_spherical[0] = yes >>> SphericalSurface::radius [0] = 250
Utility¶
NaNChecker¶
The NaNChecker thorn can be used to analyze Cactus grid variables (that is grid functions, arrays or scalars) for NaN (Not-a-Number) and infinite values.
Parameter¶
How often to check for NaNs
>>> NaNChecker::check_every = 128
Groups and/or variables to check for NaNs
>>> NaNChecker::check_vars = "all" # List of full group and/or variable names, or 'all' for everything WARNING level 1 from host ubuntu process 0 while executing schedule bin NaNChecker_NaNCheck, routine NaNChecker::NaNChecker_NaNCheck_Check in thorn NaNChecker, file /home4/yuliu/Cactus/arrangements/CactusUtils/NaNChecker/src/NaNCheck.cc:875: -> There were 142 NaN/Inf value(s) found in variable 'HYDROBASE::rho'
What to do if a NaN was found
>>> NaNChecker::action_if_found = "terminate" WARNING level 1 from host ubuntu process 0 while executing schedule bin CCTK_POSTSTEP, routine NaNChecker::NaNChecker_TakeAction in thorn NaNChecker, file /home4/yuliu/Cactus/arrangements/CactusUtils/NaNChecker/src/NaNCheck.cc:251: -> 'action_if_found' parameter is set to 'terminate' - scheduling graceful termination of Cactus >>> NaNChecker::action_if_found = "just warn"
Tracking and Visualizing NaNs Positions
>>> NaNChecker::out_NaNmask = "yes" >>> NaNChecker::out_NaNmask = "no"
SystemStatistics¶
Thorn SystemStatistics collects information about the system on which a Cactus process is running and stores it in Cactus variables. These variables can then be output and reduced using the standard Cactus methods such as IOBasic and IOScalar.
Output¶
Run memory statistics in MB
>>> IOBasic::outInfo_vars = "SystemStatistics::maxrss_mb{reductions = 'maximum'}" >>> IOScalar::outScalar_vars = "SystemStatistics::process_memory_mb" ------------------------------- Iteration Time | *axrss_mb | maximum ------------------------------- 0 0.000 | 166 32 1.000 | 172 64 2.000 | 172 96 3.000 | 172 [systemstatistics-process_memory_mb.maximum.asc]
Trigger¶
Trigger can be used to change parameters depending on data from the simulation.
TerminationTrigger¶
This thorn watches the elapsed walltime. If only n minutes are left before the walltime limit set by the user, it triggers termination of the simulation.
Parameter¶
Walltime in HOURS allocated for this job
>>> TerminationTrigger::max_walltime = 1
When to trigger termination in MINUTES
>>> TerminationTrigger::on_remaining_walltime = 1
Output remaining wall time every n minutes
>>> TerminationTrigger::output_remtime_every_minutes = 5
Create an empty termination file at startup
>>> TerminationTrigger::create_termination_file = yes
I/O¶
In Carpet, a local grid (a “cuboid” that has a uniform spacing in each axis, and lives on a single processor) has a number of attributes:
- reflevel - This is an integer specifying the grid’s “refinement level” in the Berger-Oliger algorithm.
- map - This is an integer specifying the “map” (grid patch) at this refinement level.
- component - This is an integer specifying one of the local grids in this map/patch.
IOUtil¶
Thorns providing IO methods typically have string parameters which list the variables which should be output, how frequently (i.e. how many iterations between output), and where the output should go.
Parameter¶
The name of the directory to be used for output.
>>> IO::out_dir = $parfile
How often, in terms of iterations, each of the Cactus I/O methods will write output.
>>> IO::out_every = 2 ------------------------------ it | | *::coarse_dx | | t | scalar value | ------------------------------ 0 | 0.000 | 0.25000000 | 2 | 2.000 | 0.25000000 | 4 | 4.000 | 0.25000000 | 6 | 6.000 | 0.25000000 | 8 | 8.000 | 0.25000000 |
Writing to file is performed only by processor zero. This processor gathers all the output data from the other processors and then writes to a single file.
>>> IO::out_mode = "onefile"
Every processor writes its own chunk of data into a separate output file.
>>> IO::out_mode = "proc"
Note
For a run on multiple processors, scalar, 1D, and 2D output will always be written from only processor zero (that is, required data from all other processors will be sent to processor zero, which then outputs all the gathered data). For full-dimensional output of grid arrays this may become a quite expensive operation since output by only a single processor will probably result in an I/O bottleneck and delay further computation. For this reason Cactus offers different I/O modes for such output which can be controlled by the IO::out_mode parameter, in combination with IO::out_unchunked and IO::out_proc_every.
Checkpointing
>>> IO::checkpoint_ID = "yes" # Checkpoint initial data INFO (CarpetIOHDF5): Dumping initial checkpoint at iteration 0, simulation time 0 >>> IO::checkpoint_every = 1 # How often to checkpoint >>> IO::checkpoint_on_terminate = "yes" # Checkpoint after last iteration INFO (CarpetIOHDF5): Dumping termination checkpoint at iteration 2432, simulation time 47.5 >>> IO::checkpoint_dir = "../checkpoints" # Output directory for checkpoint files [checkpoint.chkpt.it_0.file_0.h5] [checkpoint.chkpt.it_0.file_1.h5] . . . [checkpoint.chkpt.it_128.file_0.h5] . . .
Recover
>>> IO::recover_dir = "../checkpoints"  # Directory to look for recovery files
>>> IO::recover = "autoprobe"
Warning¶
No driver thorn activated to provide storage for variables
>>> ActiveThorns = "CarpetSlab"  # AMR driver provided by Carpet
>>> ActiveThorns = "PUGHSlab"    # Driver provided by PUGH
IOBasic¶
Thorn IOBasic provides I/O methods for outputting scalar values in ASCII format into files and for printing them as runtime information to screen.
- This method outputs the information into ASCII files named “<scalar_name>.{asc|xg}” (for CCTK_SCALAR variables) and “<var_name>_<reduction>.{asc|xg}” (for CCTK_GF and CCTK_ARRAY variables, where <reduction> stands for the type of reduction operation, e.g. minimum, maximum, L1 norm, or L2 norm).
- This method prints the data as runtime information to stdout. The output occurs as a table with columns containing the current iteration number, the physical time at this iteration, and more columns for scalar/reduction values of each variable to be output.
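The file-naming scheme above can be sketched with a small helper. This function is hypothetical (IOBasic constructs these names internally); it only illustrates the two naming patterns:

```python
# Hypothetical helper illustrating IOBasic's output file naming scheme:
#   "<scalar_name>.<ext>"             for CCTK_SCALAR variables
#   "<var_name>_<reduction>.<ext>"    for CCTK_GF / CCTK_ARRAY variables
def iobasic_filename(var_name, ext="asc", reduction=None):
    if reduction is None:                       # CCTK_SCALAR variable
        return f"{var_name}.{ext}"
    return f"{var_name}_{reduction}.{ext}"      # grid function/array

print(iobasic_filename("coarse_dx", ext="xg"))       # coarse_dx.xg
print(iobasic_filename("phi", reduction="maximum"))  # phi_maximum.asc
```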
Reduction Operations¶
The minimum of the values
\[\min := \min_{i} a_{i}\]
The maximum of the values
\[\max := \max_{i} a_{i}\]
The norm1 of the values
\[\operatorname{norm1} := \frac{\sum_{i}\left|a_{i}\right|}{count}\]
The norm2 of the values
\[\operatorname{norm2} := \sqrt{\frac{\sum_{i}\left|a_{i}\right|^{2}}{count}}\]
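The four reduction operations can be sketched directly from the formulas above (a minimal illustration on a flat list of values, not the thorn's actual implementation, which reduces across processors):

```python
import math

# Sketch of the four reduction operations on a flat list of values.
def reductions(a):
    count = len(a)
    return {
        "minimum": min(a),
        "maximum": max(a),
        "norm1": sum(abs(x) for x in a) / count,
        "norm2": math.sqrt(sum(abs(x) ** 2 for x in a) / count),
    }

r = reductions([3.0, -4.0])
# minimum = -4.0, maximum = 3.0, norm1 = 3.5, norm2 = sqrt(12.5)
```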
Parameter¶
Print the information of CCTK_SCALAR variables
>>> IOBasic::outInfo_vars = "grid::coarse_dx"

-------------------------------
  it |          | *::coarse_dx |
     |    t     | scalar value |
-------------------------------
   0 |    0.000 |   0.25000000 |
Print the information of CCTK_GF and CCTK_ARRAY variables with the type of reduction
>>> IOBasic::outInfo_vars = "wavetoy::phi"
>>> IOBasic::outInfo_reductions = "minimum maximum"

----------------------------------------------
  it |          |         WAVETOY::phi        |
     |    t     |   minimum    |   maximum    |
----------------------------------------------
   0 |    0.000 | 7.104375e-13 |   0.99142726 |

>>> IOBasic::outInfo_vars = "wavetoy::phi{reductions = 'norm2'}"

-------------------------------
  it |          | WAVETOY::phi |
     |    t     |    norm2     |
-------------------------------
   0 |    0.000 |   0.10894195 |
Outputs CCTK_SCALAR variables into ASCII files
>>> IOBasic::outScalar_vars = "grid::coarse_dx"

[~/simulations/example/output-0000/example/coarse_dx.xg]
"Parameter file /home4/yuliu/simulations/example/output-0000/example.par
"Created Sep 05 2019 05:05:37-0400
"x-label time
"y-label GRID::coarse_dx
"coarse_dx v time
0.0000000000000 0.2500000000000
Warning¶
WARNING[L1,P0] (IOBasic): Unknown reduction operator ‘minimum’. Maybe you forgot to activate thorn LocalReduce? (Driver provided by Carpet)
>>> ActiveThorns = "CarpetIOBasic CarpetReduce"
IOASCII¶
Thorn IOASCII provides I/O methods for 1D, 2D, and 3D output of grid arrays and grid functions into files in ASCII format.
Parameter¶
Outputs CCTK_GF and CCTK_ARRAY variables into ASCII files
>>> IOASCII::out1D_every = 1
>>> IOASCII::out1D_style = "gnuplot f(x)"
>>> IOASCII::out1D_vars = "wavetoy::phi"

[~/simulations/example1/output-0000/example1/phi_x_[1][1].asc]
#Parameter file /home4/yuliu/simulations/example/output-0000/example.par
#Created Sep 07 2019 03:55:52-0400
#x-label x
#y-label WAVETOY::phi (y = 0.1500000000000, z = 0.1500000000000), (yi = 1, zi = 1)
#Time = 0.0000000000000
-0.1500000000000 0.9914272633971
0.1500000000000 0.9914272633971
0.4500000000000 0.9689242170281
0.7500000000000 0.9254388283880
...
Warning¶
The aliased function ‘Hyperslab_GetList’ (required by thorn ‘IOASCII’) has not been provided by any active thorn ! (Driver provided by Carpet)
>>> ActiveThorns = "CarpetIOASCII"
CarpetIOBasic¶
This thorn provides info output for Carpet.
Parameter¶
Variables to output in scalar form
>>> IOBasic::outInfo_vars = "ADMBase::gxx"

-----------------------------------------------
Iteration      Time | ADMBASE::gxx
                    |      minimum      maximum
-----------------------------------------------
        0     0.000 |    1.0000000    1.0000000
Warning¶
Reduction operator “maximum” does not exist (maybe there is no reduction thorn active?)
>>> ActiveThorns = "CarpetReduce"
CarpetIOScalar¶
This thorn provides scalar output for Carpet.
Parameter¶
Variables to output in scalar form
>>> IOScalar::outScalar_vars = ""
Write one file per group instead of per variable
>>> IOScalar::one_file_per_group = yes
CarpetIOASCII¶
This thorn provides ASCII output for Carpet. The CarpetIOASCII I/O methods can output any type of CCTK grid variables (grid scalars, grid functions, and grid arrays of arbitrary dimension); data is written into separate files named “<varname>.asc”.
It reproduces most of the functionality of thorn IOASCII from the standard CactusBase arrangement. Where possible, the names of the parameters and their use are identical. However, this thorn outputs considerably more information than the standard IOASCII thorn: information about, e.g., the refinement level and the index position of the output is also given. All the output can be visualized using gnuplot.
Parameter¶
Variables to output in 1D ASCII file format
>>> IOASCII::out1D_vars = "ADMBase::gxx"

[~/simulations/example/output-0000/example/gxx.x.asc]
# 1D ASCII output created by CarpetIOASCII
# created on ubuntu by yuliu on Sep 10 2019 at 03:33:33-0400
# parameter filename: "/home4/yuliu/simulations/example/output-0000/example.par"
#
# gxx x (gxx)
#
# iteration 0 time 0
# time level 0
# refinement level 0 multigrid level 0 map 0 component 0
# column format: 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data
...

>>> IOASCII::out2D_vars = "ADMBase::gxx"

[~/simulations/example/output-0000/example/gxx.xy.asc]
# 2D ASCII output created by CarpetIOASCII
# created on ubuntu by yuliu on Sep 10 2019 at 04:14:22-0400
# parameter filename: "/home4/yuliu/simulations/example/output-0000/example.par"
#
# gxx x y (gxx)
#
# iteration 0 time 0
# time level 0
# refinement level 0 multigrid level 0 map 0 component 0
# column format: 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data
0 0 0 0 0 0 0 12 0 -12 -12 0 1
0 0 0 0 0 1 0 12 0 -11 -12 0 1
0 0 0 0 0 2 0 12 0 -10 -12 0 1
...
0 0 0 0 0 0 1 12 0 -12 -11 0 1
0 0 0 0 0 1 1 12 0 -11 -11 0 1
0 0 0 0 0 2 0 12 0 -10 -11 0 1
...
0 0 0 0 0 0 2 12 0 -12 -10 0 1
0 0 0 0 0 1 2 12 0 -11 -10 0 1
0 0 0 0 0 2 2 12 0 -10 -10 0 1

>>> IOASCII::out3D_vars = "ADMBase::gxx"

[~/simulations/example/output-0000/example/gxx.xyz.asc]
# 3D ASCII output created by CarpetIOASCII
# created on ubuntu by yuliu on Sep 10 2019 at 04:19:51-0400
# parameter filename: "/home4/yuliu/simulations/example/output-0000/example.par"
#
# gxx x y z (gxx)
#
# iteration 0 time 0
# time level 0
# refinement level 0 multigrid level 0 map 0 component 0
# column format: 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data
0 0 0 0 0 0 0 0 0 -12 -12 -12 1
0 0 0 0 0 1 0 0 0 -11 -12 -12 1
0 0 0 0 0 2 0 0 0 -10 -12 -12 1
...
0 0 0 0 0 0 1 0 0 -12 -11 -12 1
0 0 0 0 0 1 1 0 0 -11 -11 -12 1
0 0 0 0 0 2 1 0 0 -10 -11 -12 1
...
0 0 0 0 0 0 2 0 0 -12 -10 -12 1
0 0 0 0 0 1 2 0 0 -11 -10 -12 1
0 0 0 0 0 2 2 0 0 -10 -10 -12 1
...
0 0 0 0 0 0 0 1 0 -12 -12 -11 1
0 0 0 0 0 1 0 1 0 -11 -12 -11 1
0 0 0 0 0 2 0 1 0 -10 -12 -11 1
...
0 0 0 0 0 0 1 0 0 -12 -11 -11 1
0 0 0 0 0 1 1 0 0 -11 -11 -11 1
0 0 0 0 0 2 1 0 0 -10 -11 -11 1
...
0 0 0 0 0 0 2 0 0 -12 -10 -11 1
0 0 0 0 0 1 2 0 0 -11 -10 -11 1
0 0 0 0 0 2 2 0 0 -10 -10 -11 1
...
0 0 0 0 0 0 0 1 0 -12 -12 -10 1
0 0 0 0 0 1 0 1 0 -11 -12 -10 1
0 0 0 0 0 2 0 1 0 -10 -12 -10 1
...
0 0 0 0 0 0 1 0 0 -12 -11 -10 1
0 0 0 0 0 1 1 0 0 -11 -11 -10 1
0 0 0 0 0 2 1 0 0 -10 -11 -10 1
...
0 0 0 0 0 0 2 0 0 -12 -10 -10 1
0 0 0 0 0 1 2 0 0 -11 -10 -10 1
0 0 0 0 0 2 2 0 0 -10 -10 -10 1
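The fixed 13-column layout makes these files straightforward to post-process. A minimal reader might look like the following sketch (not part of CarpetIOASCII; the column names follow the "column format" header above):

```python
# Minimal sketch of a reader for CarpetIOASCII's 13-column ASCII format:
# 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data
def read_carpet_ascii(lines):
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip comments and blanks
            continue
        cols = line.split()
        it, tl, rl, c, ml, ix, iy, iz = (int(v) for v in cols[:8])
        time, x, y, z, data = (float(v) for v in cols[8:13])
        rows.append({"it": it, "rl": rl, "x": x, "y": y, "z": z,
                     "data": data})
    return rows

sample = [
    "# column format: 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data",
    "0 0 0 0 0 0 0 12 0 -12 -12 0 1",
]
print(read_carpet_ascii(sample))  # one row with x = -12.0, data = 1.0
```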
Write one file per group instead of per variable
>>> IOASCII::out3D_vars = "ADMBase::gxx"
>>> IOASCII::one_file_per_group = yes

[~/simulations/example/output-0000/example/admbase-metric.xyz.asc]
# 3D ASCII output created by CarpetIOASCII
# created on ubuntu by yuliu on Sep 10 2019 at 04:28:57-0400
# parameter filename: "/home4/yuliu/simulations/example/output-0000/example.par"
#
# ADMBASE::METRIC x y z (admbase-metric)
#
# iteration 0 time 0
# time level 0
# refinement level 0 multigrid level 0 map 0 component 0
# column format: 1:it 2:tl 3:rl 4:c 5:ml 6:ix 7:iy 8:iz 9:time 10:x 11:y 12:z 13:data
# data columns: 13:gxx 14:gxy 15:gxz 16:gyy 17:gyz 18:gzz

>>> IOASCII::out3D_vars = "ADMBase::gxx"
>>> IOASCII::one_file_per_group = no

[~/simulations/example/output-0000/example/gxx.xyz.asc]
CarpetIOHDF5¶
Thorn CarpetIOHDF5 provides HDF5-based output for the Carpet mesh refinement driver in Cactus. The CarpetIOHDF5 I/O method can output any type of CCTK grid variables (grid scalars, grid functions, and grid arrays of arbitrary dimension); data is written into separate files named “<varname>.h5”. HDF5 is highly recommended over ASCII for performance and storage-size reasons.
Note
The default is to output distributed grid variables in parallel, each processor writing a file <varname>.file_<processor ID>.h5. Unchunked means that an entire Cactus grid array (gathered across all processors) is stored in a single HDF5 dataset, whereas chunked means that all the processor-local patches of this array are stored as separate HDF5 datasets (called chunks). Consequently, for unchunked data all interprocessor ghost zones are excluded from the output, while for chunked data they are included. When visualising chunked datasets, they probably need to be recombined for a global view of the data; this needs to be done within the visualisation tool.
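The chunked/unchunked distinction can be illustrated with plain Python lists (a sketch assuming a 1D array distributed over two processors with one ghost point on each interior boundary; CarpetIOHDF5 itself handles all of this internally):

```python
# Global 1D grid array, conceptually distributed over two processors.
global_array = list(range(10))

# Chunked view: each processor's local patch includes one interprocessor
# ghost point at the interior boundary, and would be written out as-is.
chunk0 = global_array[:6]   # owned points 0..4 plus one ghost (index 5)
chunk1 = global_array[4:]   # one ghost (index 4) plus owned points 5..9

# Unchunked output excludes the ghost zones, so recombining the patches
# without their ghosts reproduces the single global dataset exactly.
recombined = chunk0[:-1] + chunk1[1:]
assert recombined == global_array
```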
Parameter¶
Variables to output in CarpetIOHDF5 file format. The variables must be given by their fully qualified variable or group name.
>>> IOHDF5::out_vars = "ADMBase::gxx"
Parallel (chunked) or serial (unchunked) output of grid variables.
>>> IO::out_mode = "onefile"
>>> IO::out_unchunked = 1

[gxx.h5]

>>> IO::out_mode = "proc"

[gxx.file_0.h5]
[gxx.file_1.h5]
[gxx.file_2.h5]
...
Do checkpointing with CarpetIOHDF5
>>> IOHDF5::checkpoint = "yes"
.. include:: CactusBase.rst
.. include:: Llama.rst
.. include:: CactusNumerical.rst
.. include:: EinsteinBase.rst
.. include:: EinsteinEOS.rst
.. include:: EinsteinEvolve.rst
.. include:: CactusConnect.rst