VSC School Projects

With the start of the VSC Research Center, seven VSC School Projects were awarded, each funding a PhD or PostDoc position dedicated to the development of scientific codes directly within the user groups. Even though scientific software is of increasing importance to broad scientific communities, it is at present unfortunately close to impossible to finance software development from scientific grants. We hope to find support for the next round of VSC School students.

News posting (11.07.2014): VSC School projects awarded

News posting (05.03.2015): VSC School Kick-Off Meeting

The following PhD students and PostDocs have been selected for a VSC School financed project (for details see below):

Patrik Gunacker – Dynamical Vertex Approximation (DΓA)
Francesca Nerattini – Development of a protein design and protein folding package for HPC: The Vienna Protein Simulator
Felix Plasser – Parallelizing the Gradient and Overlap Calculation for MRCI Wavefunctions on HPC Clusters
Martina Prugger – High-resolution numerical schemes for hyperbolic conservation laws, and their performance on modern HPC architectures
Thomas Ruh – Simulation of solids using WIEN2k
Karl Rupp – ViennaCL – Dense and Sparse Linear Algebra Library for Multi- and Many-Core Architectures
Andreas Singraber – Parallel software suite for neural network potentials for materials simulations

  • Patrik Gunacker in the group of Karsten Held, Institut für Festkörperphysik, TU Wien

    Dynamical Vertex Approximation (DΓA)

    This VSC School Project is concerned with optimizing our general-purpose quantum Monte Carlo code for many-body systems, which is designed to simulate both model systems and real materials and has high demands on computing resources. Several bottlenecks were removed, and bugs and misconceptions introduced in earlier phases have been fixed. In addition, the code has been greatly extended to calculate physical properties that were not accessible at all originally. The Monte Carlo sampling schemes were improved; among other things, a so-called worm sampling and improved estimators reduce the statistical noise. The asymptotic behavior of physical quantities was estimated, which reduces the frequency region in which the otherwise prohibitively large vertex tensor has to be calculated. Overall, a speed-up of a factor of 10 or more is estimated. We are presently working to release the code to the public. Apart from increasing the user base of the code, this will strengthen Vienna's worldwide standing in terms of solid-state physics codes.
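    The storage saving from the asymptotic treatment can be illustrated with a toy sketch: the vertex is tabulated exactly only inside a small frequency window, and a simple analytic tail is evaluated on the fly outside it. The 1/ν² tail and all numbers below are assumptions for illustration, not the actual DΓA asymptotics.

```python
N_FULL = 1024  # frequencies needed without the asymptotic treatment (assumed)
N_BOX = 64     # frequencies tabulated exactly inside the window (assumed)

# exact table for the low-frequency box (dummy values for illustration)
exact = {nu: 1.0 / (1 + nu * nu) for nu in range(-N_BOX, N_BOX)}

def vertex(nu):
    """Return the vertex at Matsubara index nu (toy model)."""
    if -N_BOX <= nu < N_BOX:
        return exact[nu]       # exact low-frequency data from the table
    return 1.0 / (nu * nu)     # assumed asymptotic tail, computed on demand

# memory reduction for a three-frequency vertex tensor:
# only the box is stored, so the tensor shrinks by (N_FULL / N_BOX)^3
saving = (N_FULL / N_BOX) ** 3
print(saving)
```

    Storing only the low-frequency box shrinks the three-frequency tensor by a factor of 4096 in this toy setting, which is the kind of reduction that makes the vertex tensor tractable.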

    Publications of this Project on the VSC

    Related Projects on the VSC: 70585, 70386, 70385, 70177

  • Francesca Nerattini in the group of Ivan Coluzza, Computergestützte Physik, Universität Wien

    Development of a protein design and protein folding package for HPC:
    The Vienna Protein Simulator

    The long-term objective of the Vienna Protein Simulator (ViPS) project is the realization of a Monte Carlo based simulation package for protein folding and protein design. The package will offer several functionalities; in this project we intend to implement the modules necessary to design highly selective protein ligands for cancer receptors. This requires the exploration of an extremely vast configurational space and is therefore out of reach for current state-of-the-art atomistic models. ViPS tackles these problems through three complementary strategies: i) a coarse-grained protein model, the caterpillar, to reduce the dimension of the configurational space; ii) an advanced and scalable Monte Carlo scheme, Virtual Move Parallel Tempering (VMPT), to enhance sampling; iii) large-scale parallelization. Although the inherently parallel nature of the VMPT scheme already makes ViPS extremely efficient on parallel clusters, serial sections of the code have been identified and optimized with the assistance of the VSC Support Team. So far, the core of the package has been written and tested in fundamental applications. The final step of the project is to apply ViPS to the design and folding of proteins optimized to bind to artificial and natural binding pockets, thereby proving the scientific validity and usefulness of the package.
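    VMPT builds on the standard replica-exchange (parallel tempering) ingredient, in which configurations at neighboring temperatures are swapped with a Metropolis criterion. A minimal sketch of that basic criterion (plain replica exchange, not the virtual-move variant used by ViPS) looks like this:

```python
import math
import random

def swap_accept(E_i, E_j, beta_i, beta_j, rng=random.random):
    """Metropolis criterion for exchanging two replicas in parallel
    tempering: accept with probability min(1, exp(dBeta * dE))."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)

# moving the lower-energy configuration to the colder replica
# (higher beta) is always accepted
print(swap_accept(E_i=-1.0, E_j=-5.0, beta_i=2.0, beta_j=1.0))
```

    The virtual-move variant additionally accumulates statistics from all proposed (including rejected) swaps, which is what makes the scheme scale so well across many replicas.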

    Publications of this Project on the VSC

  • Felix Plasser in the group of Markus Oppel and Leticia González, Institut für Theoretische Chemie, Universität Wien

    Parallelizing the Gradient and Overlap Calculation for MRCI Wavefunctions on HPC Clusters

    During this VSC School Project a highly efficient and flexible code for the computation of the overlap between general many-electron wavefunctions was created. In typical benchmark calculations, this code is a factor of 1000 faster than the previous state-of-the-art code. The newly developed code plays a central role in the González research group and is used extensively for computations on the VSC (see related projects below). Its main application area is photodynamics simulations, where it has been interfaced to the COLUMBUS, MOLCAS, ADF, and TURBOMOLE quantum chemistry program packages. As a second part of this project, the parallel COLUMBUS code for large-scale parallel multireference configuration interaction (MRCI) computations was ported to the VSC-3 architecture, making computations possible on many interesting systems that are not accessible with any other quantum chemistry code.
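    The core operation can be illustrated for the simplest case of two single Slater determinants (the actual code handles general CI wavefunctions): their overlap is the determinant of the matrix of one-electron overlaps between their occupied orbitals. A minimal pure-Python sketch:

```python
def det(m):
    """Determinant via Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def slater_overlap(s_mo):
    """Overlap of two Slater determinants, given the matrix of
    one-electron overlaps between their occupied orbitals."""
    return det(s_mo)

# identical determinants built from orthonormal orbitals overlap to 1
print(slater_overlap([[1.0, 0.0], [0.0, 1.0]]))
```

    For CI wavefunctions this determinant has to be evaluated for every pair of determinants in the two expansions, which is why reusing intermediate minors (as the production code does) is decisive for performance.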

    VSC School Seminar: Molecular Photodynamics Simulations on HPC Systems

    Publications of this Project on the VSC

    Related Projects on the VSC: 70264, 70566, 70570, 70719, 70800, 70877, 70880, 70953

  • Martina Prugger in the group of Alexander Ostermann, Institut für Mathematik, Universität Innsbruck

    High-resolution numerical schemes for hyperbolic conservation laws, and their performance on modern HPC architectures

    The performance of the Partitioned Global Address Space (PGAS) programming model, as realized in Unified Parallel C (UPC), was tested and compared with a classical Message Passing Interface (MPI) implementation of the same prototypical scientific code. As described in a recently published paper, the main advantage of UPC, with its possibility of incremental parallelization, is that it is significantly easier to develop in than MPI. UPC therefore seems to be a viable option for scientific computing, even if a few issues remain to be tackled when installing and using UPC on some HPC systems, since native support for all types of modern communication hardware is not yet provided. Depending on the communication pattern, the performance achieved after going through all the optimization stages of UPC is comparable to MPI in most situations. Presently, a performance model for memory-bound computing problems that allows one to predict whether UPC is suitable for a given task is being developed.
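    The kind of model mentioned can be sketched with a simple roofline-style estimate: a kernel is memory bound when its arithmetic intensity (flops per byte of memory traffic) falls below the machine balance. The numbers below are illustrative placeholders, not measurements from the VSC or from the project's model.

```python
def roofline_time(flops, bytes_moved, peak_flops, bandwidth):
    """Lower bound on runtime: the kernel is limited either by compute
    (flops / peak_flops) or by memory traffic (bytes_moved / bandwidth),
    whichever is larger."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

def memory_bound(flops, bytes_moved, peak_flops, bandwidth):
    """True if the arithmetic intensity (flops per byte) is below the
    machine balance (peak flops per byte of bandwidth)."""
    return flops / bytes_moved < peak_flops / bandwidth

# illustrative: a stencil update streaming 24 B per 8 flops on a node
# with 1 TFLOP/s peak compute and 100 GB/s memory bandwidth
print(memory_bound(8, 24, 1e12, 1e11))
```

    For such memory-bound kernels the communication layer (UPC or MPI) matters less than the data layout, which is consistent with the observation that tuned UPC reaches MPI-level performance.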

    VSC School Seminar: A short introduction to the PGAS programming paradigm

    Publications of this Project on the VSC

  • Thomas Ruh in the group of Peter Blaha, Institut für Materialchemie, TU Wien

    Simulation of solids using WIEN2k

    The parallel performance of the widely used program package WIEN2k, which is among the most accurate schemes for electronic structure calculations of solids using density functional theory (DFT), was benchmarked and improved in several ways. A problem with the automatic choice of the block size for the matrix distribution was solved, and new, more efficient library functions were introduced for several matrix operations, resulting in a remarkable speed-up. In addition, a new option for explicit process pinning was added to the program calls, as it turned out that pinning is indeed important for obtaining consistent and optimal run-times. Currently, the introduction of GPU support into the code is also being analyzed. All these improvements are valuable not only for the research done by our group; they also contribute to the efficient use of available HPC resources, as WIEN2k is used heavily by many research groups on the VSC clusters and worldwide.
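    The block-size issue concerns the block-cyclic matrix distribution used by ScaLAPACK-style dense solvers: the block size determines which process owns which slice of the global matrix, and a poor choice unbalances work or inflates communication. A sketch of the ownership rule for a 1-D block-cyclic layout (a generic illustration, not WIEN2k code):

```python
def owner(global_index, block_size, n_procs):
    """Process that owns a global row/column index in a 1-D
    block-cyclic distribution (as used by ScaLAPACK)."""
    return (global_index // block_size) % n_procs

# with block size 2 on 3 processes, rows 0..11 are dealt out in
# round-robin blocks of two
layout = [owner(i, 2, 3) for i in range(12)]
print(layout)  # [0, 0, 1, 1, 2, 2, 0, 0, 1, 1, 2, 2]
```

    Too large a block size leaves trailing processes idle on small matrices; too small a block size hurts BLAS efficiency, which is why an automatic but correct choice matters.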

    Publications of this Project on the VSC

    Related Projects on the VSC: 70296, 70022

  • Karl Rupp in the group of Karl Rupp, Florian Rudolf, and Josef Weinbub, Institut für Mikroelektronik, TU Wien

    ViennaCL – Dense and Sparse Linear Algebra Library for Multi- and Many-Core Architectures

    Aiming to enhance the performance of the ViennaCL library, algebraic multigrid (AMG) preconditioners have been fully ported to multi- and many-core architectures using the programming models CUDA, OpenCL, and OpenMP. Furthermore, relevant kernels in the AMG implementation have been tuned for performance: sparse matrix-vector products and sparse matrix-matrix products are now on par with, or even outperform, implementations in vendor-tuned libraries such as Intel MKL or NVIDIA cuSPARSE. In addition, ViennaCL's capabilities on shared-memory systems have been extended to distributed-memory systems through the PETSc library, including not only fully flexible backend selection (CUDA, OpenCL, or OpenMP) but also advanced preconditioners. Current efforts focus on improving the inter-process communication of ViennaCL objects in PETSc and on allowing different ViennaCL backends to be selected in PETSc for different MPI ranks, which will enable full machine utilization on hybrid clusters equipped with both CPUs and GPUs. Upcoming activities include incorporating new features such as 64-bit atomics on the new NVIDIA Pascal GPUs and AVX-512 vector extensions on Knights Landing into ViennaCL's kernels, providing tuned code for next-generation clusters.
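    The sparse matrix-vector product at the heart of AMG is typically computed on matrices stored in compressed sparse row (CSR) format. A plain-Python sketch of the kernel (ViennaCL implements tuned versions of this in CUDA, OpenCL, and OpenMP):

```python
def csr_spmv(row_ptr, col_idx, values, x):
    """y = A*x for a matrix stored in compressed sparse row format:
    row_ptr[r]..row_ptr[r+1] delimit the nonzeros of row r."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# 3x3 example matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

    The kernel streams the matrix once per product and does only two flops per nonzero, so it is strongly memory bound, which is why careful tuning per architecture pays off.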

    VSC School Seminar: Tutorial on Node-Level Performance Engineering: Hardware and Software Aspects

    VSC School Seminar: PETSc Tutorial Day & PETSc User Meeting 2016, June 28-30, 2016

    Publications of this Project on the VSC

    Related Projects on the VSC: 70977, 70748, 70388

  • Andreas Singraber in the group of Christoph Dellago, Computergestützte Physik, Universität Wien

    Parallel software suite for neural network potentials for materials simulations

    This VSC School Project is focused on the development of a software package for neural network potentials for molecular dynamics simulations. When properly trained, such potentials deliver the accuracy of ab initio calculations at a fraction of their cost. In this project the neural network approach was implemented within LAMMPS, enabling users to apply this efficient method in large-scale materials simulations on HPC systems. Furthermore, the training code for the neural networks was parallelized by adapting the multi-stream Kalman filter method. The program design incorporates the concept of distributed memory, i.e., the potentially large training data set is distributed among different nodes, which leads to a considerable speed-up of the calculation. Using the software developed in this project we have demonstrated the importance of van der Waals interactions in liquid water and have, for the first time, determined the melting point of ice from first principles.
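    The central ansatz of such high-dimensional neural network potentials is that the total energy is a sum of atomic contributions, each produced by the same network from a descriptor of that atom's local environment. A toy sketch (a one-neuron "network" with placeholder weights, not the actual trained potential):

```python
import math

def atomic_energy(descriptor, w=0.5, b=-0.1):
    """Toy one-neuron 'network' mapping an atom's environment
    descriptor to an atomic energy (weights are placeholders)."""
    return math.tanh(w * descriptor + b)

def total_energy(descriptors):
    """Neural network potential ansatz: the total energy is the sum
    of atomic contributions, each from the same network."""
    return sum(atomic_energy(g) for g in descriptors)

e1 = total_energy([0.2, 0.7, 1.1])
e2 = total_energy([1.1, 0.2, 0.7])   # same atoms, permuted order
print(abs(e1 - e2) < 1e-12)          # True: energy is permutation invariant
```

    Because the energy decomposes per atom, the force on each atom only involves descriptors of nearby atoms, which is what makes the method scale to the large systems targeted by the LAMMPS implementation.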

    Publications of this Project on the VSC

    Related Projects on the VSC: 70753