Although writing tests is important, it is not the only factor that guarantees a successful test suite. It is also necessary to find a workflow and an infrastructure that guarantee correctness. These should open up discussion between the different members of the project and the community, in order to reduce the chance of making mistakes while creating the tests.
One of the most challenging questions that validation and verification faces is: how can we guarantee that the test we are using is valid? Although this non-trivial question does not have a single correct answer, we believe that the workflow we have been developing throughout our work is a good approach to guaranteeing that the tests we write and propose are as correct as possible.
Our current workflow can be seen in the following figure:
Our tests are divided by OpenMP directive and clause. For each directive we start by studying and discussing it; we usually give presentations to guarantee that our understanding of the particular directive is shared by all the members of our team. After choosing a leader for a test covering a directive and/or clause, the test is formulated, initially as pseudo-code and then as an implementation in the target programming language. The test is presented to the whole team and discussed in our weekly meetings, where comments and concerns are raised. If the test does not satisfy the requirements, or if it is not valid, it must go back to the design stage; otherwise we use the different OpenMP implementations available to us (e.g. GCC, CLANG, XLC, and CCE) to run it and obtain the compilation and runtime results.
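To make the end product concrete, here is a minimal, hypothetical sketch of what a finished test looks like: exercise a construct, verify the result on the host, and report PASS or FAIL. The construct shown here is only illustrative; real tests target the specific directive and clause under study.
#include <stdio.h>
#include <omp.h>

#define N 1024

int main(void) {
  int a[N];
  int errors = 0;

  // Exercise the construct under test; a simple parallel for stands in
  // for whatever directive and clause the test actually targets.
  #pragma omp parallel for
  for (int i = 0; i < N; ++i)
    a[i] = i;

  // Verify the results on the host and report.
  for (int i = 0; i < N; ++i)
    if (a[i] != i)
      ++errors;

  printf("%s\n", errors == 0 ? "PASS" : "FAIL");
  return errors;
}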
Each test can either PASS or FAIL. If it passes, no further action is performed during development. But if it fails, it is opened to our community one more time for a final vote on the test. If this vote is positive, we add it to the master branch (the current release branch) of our repository to make it available in our final V&V testing suite. If it does not pass this final vote, the comments and reasons are gathered, the test goes back to development, and the cycle starts again.
Sometimes testing a feature is not possible with the API and directives available to us. In such cases warnings may be raised, but the test will still be reported as PASS. An example of this is the #pragma omp parallel directive: the number of threads can be any value between 1 and thread-limit-var. A team of a single thread does not imply that the parallel directive is broken, since 1 is a valid value according to the specification, but it also means we cannot confirm that the directive is working. In this case the test will PASS and raise a warning during execution.
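A sketch of how such a test might look follows; a plain printf stands in for the suite's warning mechanism (the OMPVV_WARNING macro seen later in this section) to keep the sketch self-contained.
#include <stdio.h>
#include <omp.h>

int main(void) {
  int num_threads = 0;

  #pragma omp parallel
  {
    #pragma omp single
    num_threads = omp_get_num_threads();
  }

  // A team of one thread is legal, so it is not a failure, but it also
  // does not demonstrate that the parallel directive created parallelism.
  if (num_threads == 1)
    printf("WARNING: parallel region ran with a single thread; "
           "the parallel directive could not be fully verified\n");

  printf("%s\n", num_threads >= 1 ? "PASS" : "FAIL");
  return !(num_threads >= 1);
}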
Let’s say that we want to create a test for the target directive and its if clause. During the presentation of the clause we learn that the target directive creates a target region of code that will be offloaded to the device, if such an offloading device is available. The leader for this test comes up with the following code:
int a = 10;
int b = 0, c = 0;

#pragma omp target if(a == 10) // true
{
  b = omp_is_initial_device();
}

#pragma omp target if(b == 11) // false
{
  c = omp_is_initial_device();
}

if (!b && c) {
  printf("PASS\n");
} else {
  printf("FAIL\n");
}
During the first discussion somebody notices that, according to the specification, scalar variables are not mapped tofrom by default in a target region, so the values assigned to b and c on the device would not be copied back to the host. The test is corrected accordingly:
int a = 10;
int b = 0, c = 0;

#pragma omp target if(a == 10) defaultmap(tofrom: scalar) // true
{
  b = omp_is_initial_device();
}

#pragma omp target if(b == 11) defaultmap(tofrom: scalar) // false
{
  c = omp_is_initial_device();
}

if (!b && c) {
  printf("PASS\n");
} else {
  printf("FAIL\n");
}
Now the test seems correct. The added defaultmap(tofrom: scalar) clause causes scalar variables to be mapped tofrom by default. Similar results could have been obtained with map(tofrom: b) and map(tofrom: c); however, it is decided that this approach is fine as it is.
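For reference, a sketch of that alternative using explicit map clauses, mapping only the scalars the host reads back, would look like this:
#pragma omp target if(a == 10) map(tofrom: b) // true
{
  b = omp_is_initial_device();
}

#pragma omp target if(b == 11) map(tofrom: c) // false
{
  c = omp_is_initial_device();
}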
The testing workflow continues. We run this test using all the available implementations. Let’s say that the test fails for a particular compiler; in that case a new bug is reported to that compiler’s developer community.
The test is now ready for final revision. It is discussed once more in the community, and then someone raises the following question: what happens if there are no devices available at runtime? In order to guarantee that this test is valid, we need to make sure that, without the if clause, the code would indeed be offloaded to the device. We add a probe to the test and produce this result:
int a = 10;
int b = 0, c = 0, d = 0;

#pragma omp target defaultmap(tofrom: scalar) // target probe
{
  d = omp_is_initial_device();
}

if (!d) {
  #pragma omp target if(a == 10) defaultmap(tofrom: scalar) // true
  {
    b = omp_is_initial_device();
  }

  #pragma omp target if(b == 11) defaultmap(tofrom: scalar) // false
  {
    c = omp_is_initial_device();
  }

  if (!b && c) {
    printf("PASS\n");
  } else {
    printf("FAIL\n");
  }
} else {
  OMPVV_WARNING("This test did not run on the device, hence it is not possible to determine the validity of the if clause");
  printf("PASS\n");
}
Now, after another final review, we are certain the test is correct to the best of our knowledge. Tests are open-sourced to make sure that any cases that escape our community are still covered. Please feel free to contribute to this scrutiny process.