Creating Reports

After running the tests, it is equally important to be able to analyse the results. We aim to support as many output formats as possible, so that users can easily report their findings and track down the cause of failures, whether in the system configuration or in vendor bugs. Currently, we support four formats: raw log files, a JSON file, a CSV file, and a prettified HTML report (which can also be hosted online).

Preliminary requirements

Before reporting results, it is necessary to enable log acquisition in the test run. To do this, please use the LOG and LOG_ALL options as explained in the makefile section. Here is an example:

make VERBOSE=1 VERBOSE_TESTS=1 LOG=1 LOG_ALL=1 CC=gcc CXX=g++ all

Raw format

Running the tests this way creates a folder called logs, containing one file per test run. Each test can be identified by its file name, which follows the naming convention described in the repository section of this site.

Each file is made up of segments, each corresponding to a single operation (compile or run). Each segment is surrounded by a header and a footer that provide extra information about the operation in that particular log segment.

HEADER
OUTPUT
FOOTER

The output itself contains everything written to the I/O during that particular operation. We have tried to standardize test outputs through the common header file that we use in all the tests; adapting all the tests to this header file is still a work in progress.

The header and the footer, on the other hand, have a standard form that makes the logfile easy to parse and provides extra information about the particular operation. The format is as follows:

HEADER:
*-*-*BEGIN*-*-*COMPILE (command)/RUN*-*-*DATE (long format) *-*-*SYSTEM NAME*-*-*SOURCE FILE TESTS*-*-*COMPILER VERSION/RUNTIME COMMENTS*-*-*GIT COMMIT*-*-*

FOOTER:
*-*-*END*-*-*COMPILE (command)/RUN*-*-*DATE (long format)*-*-*SYSTEM NAME*-*-*PASS/FAIL*-*-*COMMENTS*-*-*GIT COMMIT*-*-*

Here is an example of the complete raw format log output of a test:

*-*-*BEGIN*-*-*COMPILE CC=gcc -I./ompvv -O3 -std=c99 -fopenmp -foffload=-lm -lm *-*-*Wed May  9 19:15:47 EDT 2018*-*-**-*-*/home/josem/Documents/Sunita/Projects/SOLLVE/sollve_vv/tests/application_kernels/mmm_target.c*-*-*gcc version unknown*-*-*04e92c8*-*-*
*-*-*END*-*-*COMPILE CC=gcc -I./ompvv -O3 -std=c99 -fopenmp -foffload=-lm -lm *-*-*Wed May  9 19:15:47 EDT 2018*-*-**-*-*PASS*-*-*none*-*-*04e92c8*-*-*

*-*-*BEGIN*-*-*RUN*-*-*Wed May  9 19:16:06 EDT 2018*-*-**-*-*bin/mmm_target.c*-*-*none*-*-**-*-*04e92c8*-*-*
 
 running: bin/mmm_target.c.run 
mmm_target.c.o: PASS. exit code: 0
mmm_target.c.o:
Total time for A[500][500] X B[500][500] on device using target directive only:37 
Test PASSED.
*-*-*END*-*-*RUN*-*-*Wed May  9 19:16:43 EDT 2018*-*-**-*-*PASS*-*-*none*-*-*04e92c8*-*-*
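
Because the fields are separated by the literal *-*-* delimiter, the header and footer lines are straightforward to parse. Here is a minimal Python sketch (the logfile name is hypothetical; the field positions follow the HEADER/FOOTER format above, and empty fields such as the system name show up as empty strings):

DELIM = "*-*-*"

def parse_marker_line(line):
    # A BEGIN/END line both starts and ends with the delimiter, so the
    # first and last entries of the split are empty and can be dropped.
    parts = line.strip().split(DELIM)
    return parts[1:-1]

with open("logs/mmm_target.c.log") as f:  # hypothetical logfile name
    for line in f:
        if line.startswith(DELIM):
            fields = parse_marker_line(line)
            # fields[0] is BEGIN or END, fields[1] the operation,
            # fields[2] the date, and so on, as described above.
            print(fields)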

REPORT SUMMARY

This option provides a quick glance at the results: it prints the list of tests that failed, along with the stage of failure (i.e. at compile time or at runtime). This is useful for getting a quick overview of what happened during the execution. To obtain it, just use the following make rule:

make report_summary

An example of this summary is the following:

make report_summary
"Including generic.def file"
FAILED
Checked 97 runs
Reported errors(8):
  test_target_data_use_device_ptr.c on gcc version unknown (compiler) 
  test_target_enter_exit_data_classes.cpp on g++ version unknown (compiler) 
  test_target_map_classes_default.cpp on g++ version unknown (runtime) 
  test_target_teams_distribute_collapse.c on gcc version unknown (runtime) 
  test_target_teams_distribute_nowait.c on gcc version unknown (runtime) 
  test_target_teams_distribute_parallel_for_firstprivate.c on gcc version unknown (runtime) 
  test_target_teams_distribute_reduction_and.c on gcc version unknown (compiler) 
  test_target_teams_distribute_reduction_or.c on gcc version unknown (compiler) 

JSON FORMAT

The raw format is an easy way to keep the output logs and to inspect the history of recently run tests. However, it is not ideal if the user wants to write a script to post-process the results. For this reason, we provide a JSON version of the logs that compiles all the logfiles into a single JSON file, which can later be used by any post-processing tool.

To obtain the JSON format, use the following rule after having run the tests with logging enabled to produce the raw format log files:

make report_json

A segment of the resulting JSON file will look something like this:

 [
     ...,
  {
    "Binary path": "bin/mmm_target.c",
    "Compiler command": "gcc -I./ompvv -O3 -std=c99 -fopenmp -foffload=-lm -lm ",
    "Compiler ending date": "Wed May  9 19:16:43 EDT 2018",
    "Compiler name": "gcc version unknown",
    "Compiler output": "",
    "Compiler result": "PASS",
    "Compiler starting date": "Wed May  9 19:15:47 EDT 2018",
    "Runtime ending date": "Wed May  9 19:16:43 EDT 2018",
    "Runtime only": false,
    "Runtime output": "\u001b[0;32m \n\n running: bin/mmm_target.c.run \u001b[0m\nmmm_target.c.o: PASS. exit code: 0\n\u001b[0;31mmmm_target.c.o:\nTotal time for A[500][500] X B[500][500] on device using target directive only:37 \nTest PASSED.\u001b[0m\n",
    "Runtime result": "PASS",
    "Runtime starting date": "Wed May  9 19:16:06 EDT 2018",
    "Test comments": "none\n",
    "Test name": "mmm_target.c",
    "Test path": "/home/josem/Documents/Sunita/Projects/SOLLVE/sollve_vv/tests/application_kernels/mmm_target.c",
    "Test system": "",
    "Test gitCommit": "04e92c8"
  },...
]

Notice that, due to the output coloring in our makefile, the output contains ANSI terminal color escape sequences, e.g. \u001b[0;32m.
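
These escape sequences are easy to remove in a post-processing script. Here is a minimal Python sketch that loads the combined JSON file and strips the color codes from the captured runtime output (it assumes results.json is in the current directory; the key names are those shown in the sample above):

import json
import re

# ANSI SGR color sequences look like ESC[0;32m; this pattern removes them.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

with open("results.json") as f:
    results = json.load(f)

for entry in results:
    clean_output = ANSI_ESCAPE.sub("", entry["Runtime output"])
    print(entry["Test name"], entry["Runtime result"])
    print(clean_output)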

CSV FORMAT

Another option that may be useful for offline analysis of the data is the CSV report. This report creates a table with all the result information, where the columns correspond to the fields of the JSON format.

To obtain the CSV format, use the following rule after having run the tests with logging enabled to produce the raw format log files:

make report_csv

This will generate a results.csv file that looks something like this:

testSystem, testName, testPath, compilerName,compilerCommand, startingCompilerDate,endingCompilerDate, compilerPass, compilerOutput,runtimeOnly, binaryPath, startingRuntimeDate,endingRuntimeDate, runtimePass, runtimeOutput, gitCommit, testComments 
"fatnode", "offloading_success.c", "tests/4.5/offloading_success.c", "gcc version unknown", "gcc -I./ompvv -O3 -std=c99 -fopenmp -foffload=-lm -lm", "Mon Dec  2 19:09:35 EST 2019", "Mon Dec  2 19:09:36 EST 2019", "PASS", "", "False", "bin/offloading_success.c", "Mon Dec  2 19:09:36 EST 2019", "Mon Dec  2 19:09:37 EST 2019", "PASS", "^[[0;32m  running: bin/offloading_success.c.run ^[[0moffloading_success.c.o: PASS. exit code: 0^[[0;31moffloading_success.c.o:Target region executed on the device^[[0m", "04e92c8", "none"

The CSV file can be imported into Microsoft Excel or similar spreadsheet software.
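
For scripted analysis, the file can also be read with any standard CSV library. Here is a minimal Python sketch that tallies runtime results per compiler (it assumes results.csv is in the current directory; skipinitialspace handles the spaces that follow some commas in the header row):

import csv

with open("results.csv", newline="") as f:
    reader = csv.DictReader(f, skipinitialspace=True)
    counts = {}
    for row in reader:
        key = (row["compilerName"], row["runtimePass"])
        counts[key] = counts.get(key, 0) + 1

for (compiler, result), n in sorted(counts.items()):
    print(compiler, result, n)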

HTML Prettified report

Another, more user-friendly option is to create a results report table that looks exactly like the one in the results section of this website. To do this, it is necessary to have previously run the tests with the log options enabled to obtain the raw format logfiles. After obtaining the raw format results, you can generate the HTML report using the following rule:

make report_html

This rule uses the JSON file and a previously designed HTML template to generate a new folder called results_report, which contains the following file structure:

> tree results_report/
results_report/
├── css
│   ├── results.css
│   └── third_party
│       └── bootstrap.min.css
├── img
│   └── favicon2.png
├── js
│   ├── results.js
│   └── third_party
│       ├── angular-animate.min.js
│       ├── angular.min.js
│       ├── angular-route.min.js
│       ├── angular-sanitize.min.js
│       ├── angular-touch.min.js
│       ├── angular-ui.min.js
│       ├── ansi2html.js
│       ├── bootstrap.min.js
│       ├── jquery-2.x.min.js
│       ├── moment.min.js
│       └── ui-bootstrap-tpls.min.js
├── results.csv
├── results.html
└── results.json


5 directories, 18 files

You can use any web browser recent enough to support JavaScript and CSS. Open the results.html file in such a browser and you will see a list of all the tests, with a filter box on top of the results. This filter box allows the user to choose compilers, systems, PASS/FAIL status, or to search for tests by name.
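
Note that some browsers restrict pages opened via file:// from loading local files such as results.json (this is an assumption about browser behavior on our part, not a documented requirement of the report). If the report appears empty when opened directly, serving the folder over HTTP is a simple workaround, for example with Python's built-in server:

cd results_report
python3 -m http.server 8000

Then visit http://localhost:8000/results.html in your browser.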

Here is an example of how such a report looks:

[Screenshot of the HTML results report]

Online report (Beta)

This report is still in beta. Please report any errors you encounter while using it.

While HTML reports are a good option for local analysis of data, they may be inconvenient if you are working on a remote system. Instead of using the local report_html rule, it is possible to upload your results to our website and obtain a link to visualize them. Furthermore, it is possible to keep adding results (even from multiple systems) to the same report, as long as you specify the same tag.

There are two make variables that you need to consider:

Make variable         Description
REPORT_ONLINE_TAG     Specifies a previously generated report tag. If a report with this tag exists, its contents will be replaced by the new results, unless REPORT_ONLINE_APPEND is used.
REPORT_ONLINE_APPEND  Requires REPORT_ONLINE_TAG; appends the new results to the already existing report.

If neither of these variables is specified, a new tag will be created.
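
For example, to overwrite the contents of a previously generated report, one would run the following (the tag value here is illustrative):

make REPORT_ONLINE_TAG=402796c6e report_online

An append example is shown at the end of this section.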

IMPORTANT: Do not rely on our website to store the report. We enforce a short data retention policy that will remove results older than a certain time TBD. If you want to retain the log data, please use the local report_html option, or keep the log files so that you can re-generate the online report.

There are some software requirements for this report:

REQUIREMENTS
Python 3 or later
Either curl or the “requests” Python package

We recommend using “requests”, as it has better error handling than our curl back-end script. To install requests, use:

pip install requests

To generate an online report, use the following make rule:

make report_online

Here is an example:

> make report_online
"Including generic.def file"
Creating results.json file
Currently we only support run logs that contain compilation and run outputs. Use the 'make all' rule to obtain these
 === SUBMITTING ONLINE REPORT === 
We are using CURL because we could not find the `requests` package
Error handling is limted. Please consider installing `requests` through
    pip install requests
 Your report tag is 402796c6e. Do not lose this number
 Visit your report at:
    https://crpl.cis.udel.edu/ompvvsollve/result_report/results.html?result_report=402796c6e
 This tool is for visualization purposes. 
 Our data retention policy is of 1 month. 
 After this time, we do not guarantee this link will work anymore
 === SUBMISSION DONE === 

If you want to append results to this report, use:

> make REPORT_ONLINE_TAG=402796c6e REPORT_ONLINE_APPEND=1 report_online
"Including generic.def file"
 === SUBMITTING ONLINE REPORT === 
We are using CURL because we could not find the `requests` package
Error handling is limted. Please consider installing `requests` through
    pip install requests
 Your report tag is 402796c6e. Do not lose this number
 Visit your report at:
    https://crpl.cis.udel.edu/ompvvsollve/result_report/results.html?result_report=402796c6e
 This tool is for visualization purposes. 
 Our data retention policy is of 1 month. 
 After this time, we do not guarantee this link will work anymore
 === SUBMISSION DONE ===