Collaborative Research: PPoSS: Planning: Cross-layer Coordination and Optimization for Scalable and Sparse Tensor Networks (CROSS)

This work aims to study sparsity in widely used tensor networks by introducing constraints, regularization, dictionaries, and/or domain knowledge to achieve better data compression, faster computation, and a lower memory footprint, along with improved interpretability. The research plan is to: 1) propose memory-hierarchy- and microarchitecture-aware representations together with effective yet efficient data (re-)arrangement; 2) design memory-hierarchy-aware, load-balanced algorithms with smart page placement; 3) mitigate the curse of dimensionality through memoization and intelligent data allocation; and 4) explore specialized architectures on GPUs and FPGAs. The team will accelerate six application scenarios by leveraging the resulting scalable, highly optimized sparse tensor networks on distributed heterogeneous systems.
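As a point of reference for the sparse representations discussed above, the following is a minimal illustrative sketch (not from the project itself) of the coordinate (COO) format, a common baseline that memory-hierarchy-aware sparse tensor formats aim to improve on: only the nonzero entries and their indices are stored, rather than the full dense array. The function name `to_coo` is a hypothetical helper for illustration.

```python
# Illustrative sketch, assuming a NumPy environment: converting a dense
# 3-way tensor to coordinate (COO) form, a simple sparse representation.
import numpy as np

def to_coo(dense):
    """Return (coords, values) for the nonzero entries of a dense tensor."""
    coords = np.argwhere(dense != 0)   # (nnz, ndim) index rows, in C order
    values = dense[dense != 0]         # (nnz,) nonzero values, same order
    return coords, values

# A 4x4x4 tensor with only 3 nonzeros: COO stores 3 index rows plus
# 3 values instead of 64 dense entries.
dense = np.zeros((4, 4, 4))
dense[0, 1, 2] = 1.0
dense[3, 0, 0] = 2.0
dense[2, 2, 1] = 3.0
coords, values = to_coo(dense)
print(coords.shape)  # (3, 3): three nonzeros, three indices each
print(values)        # nonzeros in C (row-major) traversal order
```

For very sparse, high-order tensors this index/value pairing is what makes the compression and faster computation discussed above possible, at the cost of irregular memory access patterns, which is precisely where hierarchy-aware data (re-)arrangement matters.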

"This material is based upon work supported by the National Science Foundation under the grant number indicated by the NSF URL above."

"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."