and clean up after ourselves, whilst making sure the exit code of the
"main" script reflects the state of the tests.
The only successful return code is issued after all tests have run to
completion successfully.
Until now, each test created its own temp directory.
When running from ./main this is different now: the main script creates a
tempdir and passes it as a (hidden) command line option to the test scripts -
each picks it up and creates a subdirectory for its experiment in there.
When a test finishes successfully it removes its own working directory.
If, after all tests have run, the temp directory is empty, it means that all
tests finished successfully and the temp directory itself is removed.
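A minimal sketch of how the main script could do this; the test list and the
--tmpdir option name for the hidden argument are illustrative, not the actual
names used:

    import os
    import subprocess
    import sys
    import tempfile

    def main():
        # the main script owns the top-level temp directory
        tmpdir = tempfile.mkdtemp(prefix="regression-")
        exit_code = 0
        for test in ["test_ES085A.py"]:
            # pass the temp dir as (hidden) command line option to the test script
            rv = subprocess.call([sys.executable, test, "--tmpdir", tmpdir])
            if rv != 0:
                exit_code = rv          # remember any failure
        # each successful test removed its own subdir; only remove the top-level
        # dir if it is empty, i.e. all tests cleaned up after themselves
        if not os.listdir(tmpdir):
            os.rmdir(tmpdir)
        # exit code of the "main" script reflects the state of the tests
        sys.exit(exit_code)

    if __name__ == "__main__":
        main()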
The official download location for regression data has been set:
http://archive.jive.nl/regression/...
The .tar.bz2 for ES085A contains the raw sfxc data, VEX files, a .lis file, a
known-good MeasurementSet and FITS-IDI file, and a known-bad MeasurementSet,
so the auxiliary files (.vix/.lis) can be deleted from the repository.
The (binary) test data:
raw sfxc output, known-good MS, known-good IDI file, support files (*.vix, *.lis)
should not live in the code repository. They're now in a bzip2'ed tarfile
that will be downloadable from `archive.jive.nl`.
On execution, each test does a simple check to see whether the data has
already been downloaded; if not, it downloads, inflates, and extracts it in
the appropriate place.
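A sketch of that download check, assuming a local data directory name and the
exact tarfile name below /regression/ (both assumptions; the real URL is
elided above):

    import os
    import tarfile
    import urllib.request

    DATA_URL = "http://archive.jive.nl/regression/ES085A.tar.bz2"  # assumed file name
    DATA_DIR = "testdata/ES085A"                                   # assumed local location

    def ensure_test_data():
        # simple check: if the data directory exists, assume it was downloaded before
        if os.path.isdir(DATA_DIR):
            return
        os.makedirs(os.path.dirname(DATA_DIR), exist_ok=True)
        local_tar = DATA_DIR + ".tar.bz2"
        urllib.request.urlretrieve(DATA_URL, local_tar)
        # inflate + extract the bzip2'ed tarfile in the appropriate place
        with tarfile.open(local_tar, "r:bz2") as tf:
            tf.extractall(os.path.dirname(DATA_DIR))
        os.remove(local_tar)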
The data selection in python/compare_MS_numerically.py was hardcoded for
ES085A - but the script should work for any pair of MeasurementSets,
so the data selection was added as a command line argument.
In test_ES085A.py the data selection for ES085A was added to the script
invocation.
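A sketch of what the argument handling could look like; the option name
--selection is illustrative, not necessarily the one used in
compare_MS_numerically.py:

    import argparse

    parser = argparse.ArgumentParser(
        description="Numerically compare two MeasurementSets")
    parser.add_argument("reference_ms", help="known-good MeasurementSet")
    parser.add_argument("test_ms", help="freshly generated MeasurementSet")
    # previously hardcoded for ES085A; now supplied by the caller (e.g. test_ES085A.py)
    parser.add_argument("--selection", default="",
                        help="data selection to apply to both MeasurementSets")
    args = parser.parse_args()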
jive-toolchain-verify has two tools, compare-ms-idi.py and
compare-ms-idi-meta.py, to compare MS/FITS-IDI files for (meta) data content
and indicate differences.
These tests are now run on the known-good MS/FITS-IDI and the freshly
generated ones to verify that the (meta) data content remains constant.
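A rough sketch of how a test could drive these tools and fold their results
into its own exit status; the argument order and exit-code convention of the
tools are assumptions here, as are the file names:

    import subprocess
    import sys

    def compare(tool, known_good, fresh):
        # assume the tool signals differences through a non-zero exit code
        rv = subprocess.call([tool, known_good, fresh])
        if rv != 0:
            print("{0}: {1} and {2} differ".format(tool, known_good, fresh))
        return rv

    status  = compare("compare-ms-idi.py",      "known_good.ms",  "fresh.ms")
    status |= compare("compare-ms-idi-meta.py", "known_good.idi", "fresh.idi")
    sys.exit(status)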
Log the jplotter sequence of commands in the logfile.
Add new dependencies to README.md:
the test suite depends on several external tools, which are now listed, and
some more command line options are documented.
So even after a fresh git clone the testing framework should
auto-init/update and "just run" ...
After a fresh git clone there won't be any binary correlator data yet, but
the regression does run without blocking.
When errors occur, the artefacts (mostly just logfiles) are left in /tmp.
Added command line option
--jplotter /path/to/jplotter (default: taken from environment)
and wrote a jplotter postprocessing plugin that does sys.exit(-1) if the
accumulated difference between the last two plots is > tolerance (currently: 1e-7).
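A sketch of the option and of the plugin's tolerance check; the environment
variable name and the way jplotter hands the plotted datasets to a
postprocessing plugin are assumptions in this sketch:

    import argparse
    import os
    import sys

    parser = argparse.ArgumentParser()
    # --jplotter /path/to/jplotter, falling back to the environment if not given
    parser.add_argument("--jplotter",
                        default=os.environ.get("JPLOTTER"),  # env var name is an assumption
                        help="path to jplotter (default: taken from environment)")
    args = parser.parse_args()

    TOLERANCE = 1e-7

    def postprocess(last_two_plots):
        # last_two_plots: two equal-length sequences of plotted values (assumed shape)
        a, b = last_two_plots
        difference = sum(abs(x - y) for x, y in zip(a, b))
        if difference > TOLERANCE:
            # signal failure to the driving script
            sys.exit(-1)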
The jplotter is driven from a toplevel python script that:
- tests whether both measurement sets to compare really are measurement sets
  (jplotter is too forgiving about that kind of error)
- plays a nasty trick with the sys.excepthook and sys.exit functions:
  it replaces both before the call to jplotter.run_plotter(...)
  to catch exceptions and sys.exit(!=0) and translate them into
  an error being set
- exits with 0 if the two measurement sets compare numerically equal as
  defined by the compare_ms_data.py:compare_ms_data() function
The reasons for that last bit are twofold:
- a temporary dir + plotfile are created to do the batch plotting into,
  which require cleanup in all cases of exit (success or fail)
- we must be able to propagate the success/fail state of the comparison
  to our caller
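A sketch of that construction; it assumes run_plotter takes a list of
commands, and the actual jplotter call is elided so only the excepthook/exit
replacement and the cleanup are shown:

    import shutil
    import sys
    import tempfile

    def run_comparison(commands):
        error = [False]
        real_exit, real_hook = sys.exit, sys.excepthook

        def fake_exit(code=0):
            # translate sys.exit(!=0) from the plugin into "error is set"
            if code not in (0, None):
                error[0] = True

        def fake_hook(exc_type, exc_value, tb):
            # an uncaught exception also counts as an error
            error[0] = True

        tmpdir = tempfile.mkdtemp()       # temporary dir + plotfile live in here
        sys.exit, sys.excepthook = fake_exit, fake_hook
        try:
            pass                          # jplotter.run_plotter(...) would be called here
        finally:
            # restore the originals and clean up in all cases of exit (success or fail)
            sys.exit, sys.excepthook = real_exit, real_hook
            shutil.rmtree(tmpdir, ignore_errors=True)
        # propagate the success/fail state of the comparison to our caller
        real_exit(1 if error[0] else 0)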