Research

We are grateful to NVIDIA Corporation for the donation of GPUs, including Tesla K40 and GTX Titan X (Maxwell) cards, among others.

NSF SHF: PAW: Novel Functionality in Programming Models to Productively Abstract Wavefront Parallel Pattern

The goal of this project is to enable a high-performance, memory-efficient, portable, and productive software framework for parallelizing complex parallel patterns such as ‘wavefronts’, which are commonly found in large scientific applications including neutron radiation transport, bioinformatics, and atmospheric science.
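
To illustrate the target pattern, below is a minimal C sketch of a 2D wavefront sweep, where each cell depends on its north and west neighbors, so cells along the same anti-diagonal can be updated in parallel. The grid, update formula, and OpenMP directive are illustrative assumptions rather than code from the project.

    /* Wavefront sweep over an N x N grid: cell (i, j) depends on (i-1, j) and
     * (i, j-1), so all cells on one anti-diagonal (i + j == d) are independent. */
    #define N 1024

    void wavefront_sweep(double grid[N][N])
    {
        for (int d = 2; d <= 2 * N - 2; d++) {            /* sweep anti-diagonals in order */
            int i_lo = (d - (N - 1) > 1) ? d - (N - 1) : 1;
            int i_hi = (d - 1 < N - 1) ? d - 1 : N - 1;
            #pragma omp parallel for                       /* cells on this diagonal run in parallel */
            for (int i = i_lo; i <= i_hi; i++) {
                int j = d - i;
                grid[i][j] = 0.5 * (grid[i - 1][j] + grid[i][j - 1]);
            }
        }
    }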

This work is supported by the National Science Foundation (NSF).

NSF: Measuring Real World Application Performance on Next-Generation Computing Systems

The goal of this project is to create a new real-world application benchmark suite jointly with SPEC/HPG and develop performance metrics suitable for application benchmarks.

This work is supported by the National Science Foundation (NSF), in collaboration with Indiana University.

Nemours/Alfred I. duPont Hospital for Children: Big Data Analytics and Machine Learning

We will develop models to predict relapse in pediatric oncology patients using personalized genomic sequencing data, as well as predictive models for medical outcomes based on electronic health record (EHR) data.

This work is funded by Nemours/Alfred I. duPont Hospital for Children.

Developing a portable and performance-efficient DNA sequence alignment tool

We are building a tool called AccSequencer, which utilizes the power of directive-based models such as OpenMP and OpenACC to align thousands of gene queries against the human genome in a relatively short period of time. We use directive-based models instead of a low-level proprietary language such as CUDA both to reduce the steep learning curve and to be able to target multiple platforms.
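
The sketch below illustrates the directive-based approach: an OpenACC parallel loop scores many fixed-length queries against a reference sequence. The simple match-counting score, the fixed query length, and the function itself are illustrative assumptions and are not taken from AccSequencer.

    /* Score each query against every position of the reference in parallel.
     * Match counting stands in for a real alignment scoring kernel. */
    #define QLEN 100                              /* assumed fixed query length */

    void score_queries(const char *ref, long ref_len,
                       const char *queries, int nqueries, int *best)
    {
        long qtotal = (long)nqueries * QLEN;      /* total length of packed query data */
        #pragma acc parallel loop copyin(ref[0:ref_len], queries[0:qtotal]) \
                                  copyout(best[0:nqueries])
        for (int q = 0; q < nqueries; q++) {
            int best_score = 0;
            for (long pos = 0; pos + QLEN <= ref_len; pos++) {
                int score = 0;
                for (int k = 0; k < QLEN; k++)
                    score += (queries[(long)q * QLEN + k] == ref[pos + k]);
                if (score > best_score) best_score = score;
            }
            best[q] = best_score;
        }
    }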

This is work in collaboration with Nemours/Alfred I. duPont Hospital for Children.

NCAR: GPU Acceleration of the MURaM Solar Physics Model

The MURaM (Max Planck University of Chicago Radiative MHD) code is the primary solar model used at HAO for simulations of the upper convection zone, the photosphere (the visible surface of the Sun), and the corona. Originally based on a magnetohydrodynamics (MHD) module from the University of Chicago, MURaM is jointly developed and used by HAO, the Max Planck Institute for Solar System Research (MPS), and the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL). The focus of this project is to apply a GPU-accelerated MURaM model of the coupled photosphere/corona system to solar eruptive events, which will be instrumental in improving the inner boundary condition of heliospheric simulations of space weather.

This project is funded by the National Center for Atmospheric Research (NCAR).

DOE ECP SOLLVE: Creating a validation and verification testsuite for OpenMP 4.5 offloading features

The ECP SOLLVE project primarily aims at scaling OpenMP by leveraging LLVM for exascale performance and portability of applications. The validation and verification (V&V) suite provides a critical mechanism for testing an implementation’s conformance to the OpenMP standard and for exposing ambiguities in the OpenMP specification. Currently our tests focus on the new features introduced in OpenMP 4.5 for offloading computations to devices, as well as related use cases based on kernels extracted from production DOE applications. This helps application developers understand individual OpenMP features independent of other application artifacts. Going forward, we also plan to interact with standard benchmarking bodies such as SPEC/HPG to donate key ECP OpenMP benchmarks or mini-apps for potential inclusion in future releases of SPEC OMP and SPEC ACCEL. (Check out our project website.)
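
As a flavor of what such a conformance test looks like, here is a minimal sketch that offloads a loop with an OpenMP 4.5 combined target construct, maps an array to and from the device, and verifies the result on the host. It follows the spirit of the suite’s tests but is not an actual test from it.

    /* Check that "target teams distribute parallel for" with a tofrom map
     * updates the array on the device and returns the results to the host. */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        int a[N], errors = 0;
        for (int i = 0; i < N; i++) a[i] = i;

        #pragma omp target teams distribute parallel for map(tofrom: a[0:N])
        for (int i = 0; i < N; i++)
            a[i] += 1;

        for (int i = 0; i < N; i++)       /* host-side verification */
            if (a[i] != i + 1) errors++;

        printf("%s\n", errors ? "FAIL" : "PASS");
        return errors;
    }

Verifying on the host is what distinguishes a conformance check from a simple compile test: the data must actually round-trip through the device.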

This project is funded through a subcontract as part of the SOLLVE Exascale Computing Project (ECP), in collaboration with Oak Ridge National Laboratory and Argonne National Laboratory.

OpenACC/NVIDIA: Creating a validation and verification testsuite for OpenACC 2.x features

This project builds a Validation and Verification Testsuite that checks implementations of OpenACC features for conformance with the specification. The suite gives compiler developers a standard to test their implementations against, and it helps users and compiler developers alike clarify the OpenACC specification. The testsuite is being used in production and has been integrated into the test harness infrastructure of Summit, the world’s fastest supercomputer, at Oak Ridge National Laboratory.
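
The sketch below shows the shape of such a check for OpenACC 2.x: run a reduction inside an acc parallel loop on the device and compare against the value the specification’s semantics require. It mirrors the style of the suite’s tests but is illustrative rather than an actual test from it.

    /* Check an OpenACC "parallel loop" with a reduction clause: the device-
     * computed sum must match the host-computed expected value. All partial
     * sums are integers below 2^53, so the comparison is exact. */
    #include <stdio.h>

    #define N 100000

    int main(void)
    {
        double sum = 0.0;
        double expected = (double)N * (N - 1) / 2.0;

        #pragma acc parallel loop reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += i;

        int fail = (sum != expected);
        printf("%s\n", fail ? "FAIL" : "PASS");
        return fail;
    }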

This is work in collaboration with Oak Ridge National Laboratory (ORNL), PGI, Mentor Graphics and OpenACC.

Parallelizing chemical shift prediction with a portable programming model

Nuclear magnetic resonance (NMR) is a practice long cherished in the fields of biochemistry, biophysics, and structural biology. Chemical shift, the principal observable in NMR instrumentation, provides valuable insight into protein secondary structure by allowing inferences about conformation to be drawn from peak shifts, measured in units of ppm. The utility of chemical shift in structure elucidation, however, is not limited to NMR-based experimentation. NMR-inspired software solutions have materialized into a rich domain in computational chemistry; commonly, these programs consult NMR databases to relate sequence to structure. Our work focuses on parallelizing and accelerating chemical shift prediction codes such as PPM_ONE on GPUs using OpenACC.
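
The sketch below illustrates the accumulation pattern being accelerated: each atom’s predicted shift is a sum of contributions from the other atoms, which maps naturally onto an OpenACC parallel loop. The 1/r^3 term is a placeholder contribution and is not PPM_ONE’s actual model.

    /* Predict a per-atom quantity as a sum of distance-dependent contributions
     * from all other atoms; the outer loop is offloaded with OpenACC. */
    #include <math.h>

    void predict_shifts(const double *x, const double *y, const double *z,
                        int natoms, double *shift)
    {
        #pragma acc parallel loop copyin(x[0:natoms], y[0:natoms], z[0:natoms]) \
                                  copyout(shift[0:natoms])
        for (int i = 0; i < natoms; i++) {
            double s = 0.0;
            for (int j = 0; j < natoms; j++) {
                if (j == i) continue;
                double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
                double r = sqrt(dx * dx + dy * dy + dz * dz);
                s += 1.0 / (r * r * r);   /* placeholder distance-based term */
            }
            shift[i] = s;
        }
    }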

This is work in collaboration with Prof. Juan Perilla from the Department of Chemistry, UDEL.

Scalable graph analytics and machine learning on distributed, heterogeneous systems

We are leveraging distributed programming frameworks (such as Apache Spark) and high-level accelerator frameworks and libraries (such as OpenACC, OpenMP, and PyOpenCL) to bridge the gap between Big Data and HPC. We are applying our techniques to graph analytics and machine learning codes to demonstrate scalable performance on real-world applications. Our goal is to develop techniques that allow programmers to achieve scalable performance on distributed, heterogeneous systems using high-level languages and libraries.

This is work in collaboration with Prof. Michela Taufer and her research group in the Dept. of CIS, UDEL.