A suite of regression tests to help automate validation of new builds/features in j2ms2/tConvert/other tools


Test polarization labelling correctness of j2ms2 and tConvert

This test uses three datasets:

  • raw SFXC output files from clocksearch job 31334 (31334/*.cor), from experiment N20L2. This data was chosen because the baselines to Arecibo (the 305m dish) are so sensitive that the four polarization products have different, discernible-by-eye, amplitudes; see this amplitude-vs-frequency plot
  • known-good MeasurementSet produced from that: clk_No0002_31334.ms
  • known-good FITS-IDI file produced from clk_No0002_31334.ms: CLK_NO0002_31334.IDI

And two support files:

  • VEX files for the experiment (N20L2.vix and n20l2.vix), which are identical; under certain circumstances the j2ms2 tool expects a VEX file named after the directory it is run in.

This ~200 MB of binary data is stored on the JIVE archive and will be automatically downloaded the first time this regression test is run.

Then, starting from the raw correlator output and the VEX file, two data sets are created in a temporary location:

  • j2ms2 is run "manually" by specifying -o clk_No0002_31334.ms 31334/*.cor on the command line, producing MeasurementSet clk_No0002_31334.ms
  • the MeasurementSet is converted to FITS-IDI using tConvert:
    • clk_No0002_31334.ms => CLK_NO0002_31334.IDI
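The two conversion steps above can be sketched roughly as follows. This is a minimal Python sketch, not part of the test suite itself; in particular the tConvert argument order (input MS, output FITS-IDI) is an assumption to be checked against your installation:

```python
import subprocess
import sys
from glob import glob


def run_step(cmd):
    """Run one toolchain step and return its exit code."""
    return subprocess.run(cmd).returncode


def build_datasets():
    # j2ms2: correlator .cor files -> MeasurementSet (command line as in the text)
    cor_files = sorted(glob("31334/*.cor"))
    steps = [
        ["j2ms2", "-o", "clk_No0002_31334.ms", *cor_files],
        # tConvert: MeasurementSet -> FITS-IDI; argument order is an assumption
        ["tConvert", "clk_No0002_31334.ms", "CLK_NO0002_31334.IDI"],
    ]
    for cmd in steps:
        if run_step(cmd) != 0:  # every tool must terminate with exit code 0
            sys.exit("step failed: " + " ".join(cmd))


if __name__ == "__main__":
    build_datasets()
```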

The following tests are performed:

  • j2ms2 and tConvert should run successfully, i.e. terminate with exit code 0

  • Verify that "all data" from the correlator files is written to the MeasurementSets and FITS-IDI files. The jive-toolchain-verify tool compare-ms-idi.py is used to compute a six-way diff between the two "gold" datasets and the four newly created ones. The tool collects integrated weight and integration time per baseline per source, but does not inspect the data content itself. Any differences reported are due to the toolchain-under-test skipping (or duplicating) data in one of the data formats; the diff output shows what happened in which format(s).
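The bookkeeping this diff relies on can be illustrated with a small sketch. The helper names are hypothetical and plain tuples stand in for the MS/FITS-IDI table rows the real tool reads:

```python
from collections import defaultdict


def accumulate(rows):
    """Sum weight and integration time per (baseline, source).

    Each row is a (baseline, source, weight, integration_time) tuple;
    the visibility data itself is never inspected.
    """
    totals = defaultdict(lambda: [0.0, 0.0])
    for baseline, source, weight, exposure in rows:
        totals[(baseline, source)][0] += weight
        totals[(baseline, source)][1] += exposure
    return {key: tuple(val) for key, val in totals.items()}


def diff(gold, test):
    """Report keys whose accumulated totals differ between two datasets."""
    problems = []
    for key in sorted(set(gold) | set(test)):
        if gold.get(key) != test.get(key):
            problems.append((key, gold.get(key), test.get(key)))
    return problems
```

A dataset in which the toolchain-under-test skipped or duplicated an integration shows up as a differing accumulated total for that (baseline, source) pair.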

  • Verify that all metadata (antenna, source, frequency setup properties) are correctly propagated. The jive-toolchain-verify tool compare-ms-idi-meta.py is used to compute a six-way diff between the metadata of the two "gold" datasets and the four data files generated by the toolchain under test. Any differences reported should be due to the new toolchain writing different metadata to (one of) the other formats.

  • Verification of numerical equality of the data written to the MeasurementSet(s): a batch script is run in combination with a postprocessing module for the jiveplot tool. The jiveplot tool is used to:

    • select baselines to Ar
    • compute amplitude+phase (y-axes) as a function of frequency (x-axis)
    • those are integrated (summed) for each baseline/subband/polarization/source/scan

    These values are extracted from two MeasurementSets: the gold one and the one produced by the toolchain under test.

    A purpose-written postprocessing module for jiveplot compares the datasets across two successive plot actions. It keeps the last plotted dataset and, when a new dataset is plotted, computes the differences between the x- and y-values found in all data sets, accumulating those differences separately for x and y. If any accumulated difference exceeds the tolerance (currently 1e-7), the postprocessing tool prints a warning.
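The comparison logic of that module can be sketched as follows. This is a simplified stand-in assuming datasets arrive as parallel x/y sequences; the real module hooks into jiveplot's plot actions:

```python
TOLERANCE = 1e-7  # current tolerance, as stated above


class CompareLastDataset:
    """Keep the last plotted dataset; on the next plot, accumulate the
    absolute x- and y-differences and warn when either exceeds TOLERANCE."""

    def __init__(self):
        self.previous = None

    def plot(self, xs, ys):
        result = None
        if self.previous is not None:
            prev_xs, prev_ys = self.previous
            # accumulate differences separately for x and y
            dx = sum(abs(a - b) for a, b in zip(prev_xs, xs))
            dy = sum(abs(a - b) for a, b in zip(prev_ys, ys))
            if dx > TOLERANCE or dy > TOLERANCE:
                print(f"warning: accumulated difference dx={dx:g} dy={dy:g} "
                      f"exceeds tolerance {TOLERANCE:g}")
            result = (dx, dy)
        self.previous = (list(xs), list(ys))
        return result
```

Plotting the gold dataset followed by the toolchain-under-test dataset thus yields the accumulated x/y differences between the two.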


  • The same quantities are extracted from the FITS-IDI file and compared for numerical equivalence.