Department of Information Technology
Uppsala Architecture Research Team

Modeling Performance Variation Due to Cache Sharing

Shared cache contention can cause significant variability in the performance of co-running applications from run to run. This variability arises from different overlappings of the applications' phases, which can be the result of offsets in application start times or other delays in the system. Understanding this variability is important for generating an accurate view of the expected impact of cache contention. However, variability effects are typically ignored due to the high overhead of modeling or simulating the many executions needed to expose them.

This paper introduces a method for efficiently investigating the performance variability due to cache contention. Our method relies on input data captured from native execution of applications running in isolation and a fast, phase-aware, cache sharing performance model. This allows us to assess the performance interactions and bandwidth demands of co-running applications by quickly evaluating hundreds of overlappings.
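The core idea above can be illustrated with a small sketch: given per-phase profiles captured from each application running alone, shift one profile by a start-time offset and combine the overlapping phases with a contention model, then sweep many random offsets to expose the run-to-run variability. The profile values, the pairwise `contend` function, and all names below are hypothetical stand-ins, not the paper's actual model.

```python
import random

def co_run_slowdown(phases_a, phases_b, offset, contend):
    """Estimate the slowdown of A when co-run with B started `offset`
    intervals later. phases_a/phases_b are per-interval cache-sensitivity
    values in [0, 1] captured in isolation (hypothetical representation).
    `contend` is a toy pairwise contention model."""
    total = 0.0
    for t, a in enumerate(phases_a):
        b_idx = t - offset
        # Outside B's execution window there is no contention.
        b = phases_b[b_idx] if 0 <= b_idx < len(phases_b) else 0.0
        total += contend(a, b)
    return total / len(phases_a)

def sweep_offsets(phases_a, phases_b, n_samples, seed=0):
    """Evaluate many start-time offsets to expose the spread of
    co-run performance across possible phase overlappings."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        off = rng.randrange(len(phases_b))
        # Toy model: slowdown grows with the product of sensitivities.
        results.append(co_run_slowdown(phases_a, phases_b, off,
                                       lambda a, b: 1.0 + a * b))
    return min(results), max(results)
```

Because each offset evaluation only walks precomputed phase profiles, hundreds of overlappings can be scored far faster than re-running the applications natively, which is what makes the large offset sweep practical.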

We evaluate our method on a contemporary multicore machine and show that performance and bandwidth demands can vary significantly across runs of the same set of co-running applications. We show that our method can predict both application slowdown, with an average relative error of 0.41% (maximum 1.8%), and bandwidth consumption. Our method is on average 213x faster than native execution of the applications for performance measurements.


Updated 2013-07-22 11:41:06 by David Black-Schaffer.