Until now, all tests created their own temp directory; when running from `./main` this now works differently. The main script creates a temp directory and passes it as a (hidden) command line option to the test scripts; each test picks it up and creates a subdirectory for its experiment in there. When a test finishes successfully it removes its own working directory. If, after all tests have run, the temp directory is empty, all tests finished successfully and the temp directory itself is removed.
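The hand-off described above can be sketched as follows. The option name `--tmpdir` and the helper functions are hypothetical, chosen only to illustrate the scheme, not the actual main/test script code:

```python
import argparse
import os
import tempfile

def parse_tmpdir(argv):
    """Pick up the (hidden) --tmpdir option passed by the main script."""
    parser = argparse.ArgumentParser()
    # Suppressed from --help output, mirroring a "hidden" option.
    parser.add_argument("--tmpdir", help=argparse.SUPPRESS)
    args, _ = parser.parse_known_args(argv)
    return args.tmpdir

def run_test(tmpdir, name):
    """Each test creates its own subdirectory and removes it on success."""
    workdir = os.path.join(tmpdir, name)
    os.mkdir(workdir)
    try:
        pass  # ... run the actual experiment inside workdir ...
    except Exception:
        return False   # failure: keep workdir around for inspection
    os.rmdir(workdir)  # success: remove our own working directory
    return True

# Main script: create the shared temp directory, run the tests,
# and remove it only if every test cleaned up after itself.
tmpdir = tempfile.mkdtemp(prefix="regression-")
for test in ("test_ES085A",):
    run_test(parse_tmpdir(["--tmpdir", tmpdir]), test)
if not os.listdir(tmpdir):  # empty => all tests succeeded
    os.rmdir(tmpdir)
```

A failing test leaves its subdirectory behind, which both preserves the evidence and prevents the main script from deleting the shared directory.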
Test numerical reproducibility of j2ms2 and tConvert
This test contains three raw datasets:

- raw SFXC output files from job 24427 (`24427/*.cor`), from experiment ES085A. This data was chosen because it has a strong maser line, easily observable off-center in subband 3.
- a known-good MeasurementSet produced from that:
- a known-good FITS-IDI file produced from

And two types of support files:

- VEX file(s) for the experiment (`es085a.vix`), which are identical (one is a symlink of the other); under certain circumstances the `j2ms2` tool expects a VEX file named after the directory it is run in.
- a lisfile, used to control the input to `j2ms2` so as to exactly (re)produce a given MeasurementSet.
This ~1 GB of data is stored on the JIVE archive and will be automatically downloaded the first time this regression test is run.
Then, starting from the raw correlator output, the VEX file and the "lis" file, four data sets are created in a temporary location:

- `j2ms2` is run with the lisfile; the lisfile incorporates all `*.cor` files from the job, producing MeasurementSet
- `j2ms2` is run "manually" by specifying `-o es085a.ms job/*.cor` on the command line, producing MeasurementSet
- both MeasurementSets are converted to FITS-IDI using `tConvert`
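The four data sets might be produced along these lines. The exact invocations, output file names and the lisfile name below are assumptions for illustration, not the actual test script:

```python
import glob

# Hypothetical file names; the real test script knows the actual ones.
LIS_FILE = "es085a.lis"
COR_FILES = sorted(glob.glob("job/*.cor"))

def make_commands():
    """Return the four tool invocations that create the data sets."""
    return [
        # 1. MS via the lisfile (which incorporates all *.cor inputs)
        ["j2ms2", LIS_FILE],
        # 2. MS produced "manually" from the *.cor files
        ["j2ms2", "-o", "es085a.ms"] + COR_FILES,
        # 3+4. both MeasurementSets converted to FITS-IDI
        ["tConvert", "es085a-lis.ms", "es085a-lis.fits"],
        ["tConvert", "es085a.ms", "es085a.fits"],
    ]

for cmd in make_commands():
    print(" ".join(cmd))  # a real run would hand each cmd to subprocess.run
```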
The following tests are performed:

`tConvert` should run successfully, i.e. terminate with exit code 0.
Verify that "all data" from the correlator files is written to the MeasurementSets and FITS-IDI files. The jive-toolchain-verify tool `compare-ms-idi.py` is used to compute a six-way diff between the two "gold" datasets and the four newly created ones. The tool collects integrated weight and integration time per baseline per source, but does not look at the data content itself. Any differences reported are due to the toolchain-under-test skipping (or duplicating) data in either of the data formats; the diff output indicates what happened in which format(s).
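A minimal pure-Python sketch of such a diff, assuming each dataset has already been reduced to a mapping from (baseline, source) to (integrated weight, integration time); `compare-ms-idi.py` itself extracts these numbers from the MS/FITS-IDI files, and the baseline/source names here are made up:

```python
def diff_datasets(gold, candidate, tol=0.0):
    """Report (baseline, source) keys whose integrated weight or
    integration time differ between a gold and a candidate dataset."""
    problems = []
    for key in sorted(set(gold) | set(candidate)):
        if key not in candidate:
            problems.append((key, "missing in candidate (data skipped?)"))
        elif key not in gold:
            problems.append((key, "extra in candidate (data duplicated?)"))
        else:
            gw, gt = gold[key]
            cw, ct = candidate[key]
            if abs(gw - cw) > tol or abs(gt - ct) > tol:
                problems.append((key, f"weight {gw}->{cw}, time {gt}->{ct}"))
    return problems

gold = {("Ef-Wb", "3C84"): (10.0, 120.0), ("Ef-On", "3C84"): (9.5, 120.0)}
cand = {("Ef-Wb", "3C84"): (10.0, 120.0)}  # one baseline's data skipped
for key, why in diff_datasets(gold, cand):
    print(key, why)
```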
Verify that all metadata (antenna, source, frequency setup properties) are correctly propagated. The jive-toolchain-verify tool `compare-ms-idi-meta.py` is used to compute a six-way diff between the metadata from the two "gold" datasets and the four data files generated by the toolchain under test. Any differences reported should be due to the new toolchain writing different metadata to (one of) the formats.
Verification of numerical equality of the data written to the MeasurementSet(s): a batch script, in combination with a postprocessing module for the jiveplot tool, is run. The jiveplot tool is used to:

- select three time ranges (`scan 18 19 33` of ES085A)
- compute amplitude+phase (y-axes) as a function of frequency (x-axis)
- integrate (sum) those per baseline/subband/polarization/source/scan

These values are extracted from two MeasurementSets: the gold one and the one produced by the toolchain under test.

A special-purpose postprocessing module for jiveplot compares the datasets in two successive plot actions. It keeps the last plotted data set and, when a new dataset is plotted, computes the difference between the x- and y-values found in all data sets, accumulating those differences separately for x and y. If any of the accumulated differences exceeds the tolerance (currently `1e-7`) the postprocessing tool prints a warning.
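A sketch of that comparison logic, under the assumption that each "plot action" hands the module arrays of x- and y-values; the class and method names are invented for illustration and are not jiveplot's actual postprocessing API:

```python
TOLERANCE = 1e-7  # current tolerance used by the test

class DiffAccumulator:
    """Keep the last plotted dataset; when the next one arrives,
    accumulate |dx| and |dy| and warn if either total exceeds TOLERANCE."""

    def __init__(self):
        self.last = None   # (xs, ys) of the previous plot action
        self.acc_x = 0.0   # accumulated x differences
        self.acc_y = 0.0   # accumulated y differences

    def plot(self, xs, ys):
        if self.last is not None:
            prev_xs, prev_ys = self.last
            self.acc_x += sum(abs(a - b) for a, b in zip(xs, prev_xs))
            self.acc_y += sum(abs(a - b) for a, b in zip(ys, prev_ys))
            if self.acc_x > TOLERANCE or self.acc_y > TOLERANCE:
                print("WARNING: accumulated difference exceeds", TOLERANCE)
        self.last = (xs, ys)

acc = DiffAccumulator()
acc.plot([1.0, 2.0], [0.5, 0.5])  # gold MeasurementSet
acc.plot([1.0, 2.0], [0.5, 0.5])  # toolchain-under-test: identical, no warning
```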
Extract the same data from the FITS-IDI files and compare for numerical equivalence.