Title: Scaling the Memory Wall through Software-Defined Cache Hierarchies and Object-Based Memory Hierarchies

Abstract: Memory accesses often limit the performance and efficiency of current systems, and the trend towards highly parallel systems and specialized cores is placing mounting pressure on the memory hierarchy. Ideally, the memory system should be managed to approach the performance of application-specific memory hierarchies that hold working sets at minimum latency and energy. However, conventional systems are far from this ideal: they instead implement rigid hierarchies of progressively larger and slower caches, fixed at design time and optimized for the average case. To scale the memory hierarchy, future computing systems need more efficient ways to leverage memory resources than a rigid hierarchy.

In this talk, I will first present Jenga, a software-defined cache hierarchy that dynamically and transparently specializes itself to each application. Jenga treats cache banks as a resource pool, out of which the best hierarchy is built for each application at runtime. Jenga hardware exposes memory resources to software, allowing it to define, at runtime, the configuration of each application's cache hierarchy via novel performance models and optimization algorithms. It also provides hardware support to monitor applications and mechanisms to reduce the cost of reconfigurations. Specializing the memory system to each application maximizes performance and energy efficiency: applications avoid using resources they do not benefit from and configure the remaining resources to hold their data at minimum latency and energy. As a result, Jenga improves full-system energy-delay product (EDP) by up to 85% over a combination of state-of-the-art techniques.

I will also present Hotpads, a new memory hierarchy designed from the ground up to expose an object-based memory model. Hotpads hides the memory layout and takes control over it, dispensing with the conventional flat address space abstraction. The Hotpads ISA prevents programs from reading or manipulating raw pointers, enabling Hotpads hardware to rewrite them under the covers. Hotpads is a hierarchy of directly addressed memories that store and transfer objects implicitly and compactly, using on-chip capacity more efficiently than caches. By adopting an object-based interface, Hotpads delivers substantial gains over conventional designs and enables new capabilities, such as architectural support for object allocation and recycling. Moreover, Hotpads unlocks many new optimizations and opens up research avenues in object-based memory management, such as compression, prefetching, and on-the-fly memory layout optimizations.

Bio: Po-An Tsai is a Ph.D. candidate at MIT, where he is advised by Prof. Daniel Sanchez. Po-An's research focuses on redesigning the memory hierarchy to exploit both static and dynamic application information to reduce data movement in computer systems. He received his S.M. in EECS from MIT, where he held the Jacobs Presidential Fellowship, and his B.S. in EE from National Taiwan University.
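
To give a flavor of the "cache banks as a resource pool" idea from the abstract, the sketch below shows a toy, hypothetical allocator that hands out a fixed pool of banks to applications by greedily following a made-up miss-curve performance model. This is not Jenga's actual algorithm (which also accounts for placement and latency); the application names, miss curves, and the greedy marginal-benefit policy are all illustrative assumptions.

```cpp
// Toy sketch (NOT Jenga's actual algorithm): allocate a shared pool of cache
// banks to applications by greedily giving each bank to the application whose
// hypothetical performance model predicts the largest marginal benefit.
#include <cstdio>
#include <vector>

struct App {
    const char* name;
    // Hypothetical miss curve: predicted misses (arbitrary units) as a
    // function of the number of banks assigned to this application.
    std::vector<double> predictedMisses;
    int banksAssigned = 0;
};

// Predicted benefit of giving `app` one more bank, per its model.
double marginalBenefit(const App& app) {
    size_t next = app.banksAssigned + 1;
    if (next >= app.predictedMisses.size()) return 0.0;
    return app.predictedMisses[app.banksAssigned] - app.predictedMisses[next];
}

int main() {
    // Two applications with made-up miss curves (index = banks assigned).
    std::vector<App> apps = {
        {"streaming",       {100, 98, 97, 96.5, 96.2, 96.0}},  // barely benefits
        {"pointer-chasing", {100, 60, 40, 30,   25,   22}},    // benefits a lot
    };
    const int totalBanks = 5;  // the shared resource pool

    // Greedy allocation: hand out banks one at a time to whichever
    // application gains the most from the next bank.
    for (int b = 0; b < totalBanks; b++) {
        App* best = nullptr;
        for (App& app : apps)
            if (!best || marginalBenefit(app) > marginalBenefit(*best))
                best = &app;
        best->banksAssigned++;
    }

    for (const App& app : apps)
        std::printf("%s: %d bank(s)\n", app.name, app.banksAssigned);
    return 0;
}
```

In this toy run the streaming application, which gains little from extra capacity, ends up with no banks, illustrating the abstract's point that applications can avoid resources they do not benefit from.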
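
Similarly, the following minimal sketch illustrates the object-based interface idea behind Hotpads: programs hold opaque handles rather than raw pointers, so the memory system is free to relocate objects and rewrite their locations under the covers. This is a software analogy, not the Hotpads ISA; the ObjectMemory class, its handle/arena scheme, and the compaction routine are assumptions made for illustration only.

```cpp
// Toy sketch of an object-based memory interface in the spirit of Hotpads
// (NOT the actual Hotpads ISA): the program never sees addresses, only
// handles, so objects can be moved and their locations rewritten freely.
#include <cstdio>
#include <cstring>
#include <vector>

using Handle = size_t;  // opaque object identifier, not an address

class ObjectMemory {
    struct Entry { size_t offset; size_t size; bool live; };
    std::vector<char>  arena_;  // hidden flat storage
    std::vector<Entry> table_;  // handle -> current location (hidden)
public:
    // Object allocation: the program asks for an object of a given size
    // and gets back a handle, never an address.
    Handle alloc(size_t bytes) {
        table_.push_back({arena_.size(), bytes, true});
        arena_.resize(arena_.size() + bytes);
        return table_.size() - 1;
    }
    void free(Handle h) { table_[h].live = false; }
    void write(Handle h, size_t off, const void* src, size_t len) {
        std::memcpy(&arena_[table_[h].offset + off], src, len);
    }
    void read(Handle h, size_t off, void* dst, size_t len) {
        std::memcpy(dst, &arena_[table_[h].offset + off], len);
    }
    // Compaction: slide live objects together and rewrite their recorded
    // locations. Handles stay valid because programs never saw the offsets.
    void compact() {
        std::vector<char> packed;
        for (auto& e : table_) {
            if (!e.live) continue;
            size_t newOff = packed.size();
            packed.insert(packed.end(),
                          arena_.begin() + e.offset,
                          arena_.begin() + e.offset + e.size);
            e.offset = newOff;
        }
        arena_ = std::move(packed);
    }
};

int main() {
    ObjectMemory mem;
    Handle a = mem.alloc(8);
    Handle b = mem.alloc(8);
    mem.write(b, 0, "hotpads", 8);

    mem.free(a);    // leaves a gap in the hidden arena
    mem.compact();  // object b moves; its handle is untouched

    char buf[8];
    mem.read(b, 0, buf, 8);
    std::printf("%s\n", buf);
    return 0;
}
```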