Validation guide

This validation guide is intended for GridLAB-D developers who write validation test models.

Autotesting

GridLAB-D employs an integrated automatic testing procedure that is dispatched using the --validate command line option. The validation process scans the current folder and all subfolders for folders named autotest. Within each autotest/ folder, it searches for files named test_*.glm, creates a folder using the basename of each GLM file, copies the GLM file into that folder, and starts a GridLAB-D simulation there. During the simulation, all output is collected in redirection files named gridlabd.*.
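For example, a test tree might look like the following before and after a validation run (the module and test names here are hypothetical, and the exact gridlabd.* file names depend on the run):

```
powerflow/autotest/test_voltage.glm    (discovered test model)
powerflow/autotest/test_voltage/       (working folder created by --validate)
    test_voltage.glm                   (copy of the test model)
    gridlabd.out, gridlabd.err, ...    (redirected simulation output)
```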

Depending on the basename of the model, the validation process expects a different outcome. If the basename contains the pattern _err, the test is expected to fail with a non-zero exit code; if it does not fail, that is counted as a validation failure. If the basename contains the pattern _exc, the test is expected to raise an exception. If the basename contains the pattern _opt, the result of the test is ignored. Otherwise, the test is expected to succeed.
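The following hypothetical test names illustrate each naming pattern and the outcome it implies:

```
autotest/test_powerflow.glm          expected to succeed (exit code 0)
autotest/test_badsyntax_err.glm      expected to fail with a non-zero exit code
autotest/test_divzero_exc.glm        expected to raise an exception
autotest/test_experimental_opt.glm   result ignored either way
```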

When all of the discovered autotests have completed, a summary of the validation results is displayed.

Parallelism

Validation can be run on multiple processors using the --threadcount or -T command line option. For example, the command gridlabd -T 2 --validate runs validation on two processors. To use all available processors, use -T 0. You cannot specify more parallelism than the number of available processors.

You can track the progress of validation using the --pcontrol or --pstatus command line options.
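A typical session might look like this (a sketch; the assumption here is that --pstatus is issued from a separate shell while the validation run is active):

```
# run validation on two processors
gridlabd -T 2 --validate

# run validation on all available processors
gridlabd -T 0 --validate

# from another shell, check the status of the running job
gridlabd --pstatus
```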

Note that because parallel tests may access files concurrently, care should be taken not to write outputs to folders shared by the validation processes, such as /tmp.

Assertions

Most tests should implement at least one assert object to verify the outcome of the test. Alternatively, an #on_exit script, a script on_term directive, or a Python on_term event handler can be used to verify the outcome of a test.
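For example, a test model can embed an assert object inside the object whose behavior it checks. This is a minimal sketch assuming the assert module's target/relation/value interface; the class and property names are hypothetical:

```
module assert;

object example_meter {           // hypothetical object under test
    name "test_meter";
    object assert {
        target "measured_power"; // hypothetical property to verify
        relation "==";           // comparison to apply
        value 1200.0;            // expected value
        within 0.5;              // tolerance on the comparison
    };
}
```

When the assertion fails, the simulation exits with an error, which the validation process counts as a failure unless the basename contains _err.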