SparCity aims to create a supercomputing framework that provides efficient algorithms and coherent tools for maximizing the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new application areas for sparse computations in data analytics and deep learning.
The goal of the TDLPP workshop is to provide a venue for developers and users of tools that address the important topic of memory access optimization. While hardware continues to evolve and high-bandwidth memory becomes available in accelerators and mainstream CPUs, the gap between compute capability (arithmetic operations per second) and memory speed (access latency and bytes transferred per second) continues to widen. Tools are therefore needed to help developers understand the behavior of their codes and to support them in optimizing and modeling their applications. This is especially true in application areas that involve sparse matrices, tensors, or graphs.
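To illustrate why sparse codes are so sensitive to memory behavior, consider a sparse matrix-vector product over a matrix stored in CSR format: each nonzero costs one multiply-add but requires an indirect, data-dependent load of the input vector, so the kernel is typically bound by memory rather than arithmetic. A minimal sketch (the function and variable names here are illustrative, not taken from any particular tool or library):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR format.

    Each nonzero performs one multiply-add but two loads: values[k]
    (streamed contiguously) and x[col_idx[k]] (an indirect, data-dependent
    access). The irregular second load is what makes SpMV memory-bound.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]  # indirect load of x
        y[i] = acc
    return y

# Example: the 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form.
values  = [1.0, 2.0, 3.0]
col_idx = [0, 2, 1]
row_ptr = [0, 2, 3]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

Because the access pattern through `x` depends on the matrix's sparsity structure, it is hard to predict statically, which is precisely where measurement and modeling tools of the kind discussed at the workshop become valuable.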