Abstract: Multicore architectures, which have multiple processing units on a single chip, have been adopted by most chip manufacturers. Most such chips contain on-chip caches that are shared by some or all of the cores on the chip. To use the available processing resources on such platforms efficiently, scheduling methods must be aware of these caches. In this talk, I will present a method for improving cache performance when scheduling real-time workloads. Additionally, I will discuss our ongoing work on methods to dynamically profile the cache behavior of real-time tasks, which allows our scheduling method to be employed effectively. These scheduling and profiling methods are especially applicable when multiple multithreaded real-time applications with large working sets are present. As this could easily be the case for a multimedia server, we also present a preliminary case study showing how our best-performing heuristics can improve the end-user performance of video encoding applications; we plan to expand this study in future work.

Bio: John Calandrino is a Ph.D. student in Computer Science at the University of North Carolina at Chapel Hill. He received a B.S. in Computer Science from the University of Virginia in 2002 and an M.S. in Computer Science from Cornell University in 2004. His research interests include designing cache-aware real-time scheduling policies for multicore platforms, investigating the scalability impact of multicore platforms on operating system schedulers, and determining which architectural features can be useful when designing real-time and non-real-time schedulers for upcoming multicore platforms. He plans to graduate in the Summer of 2009.