Automatic Performance Model Construction for the Fast Software Exploration of New Hardware Designs
John Cavazos, Christophe Dubach, Felix Agakov, Edwin Bonilla, Michael O'Boyle, Grigori Fursin and Olivier Temam
In: Int. Conf. on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 23-25 Oct 2006, Seoul, Korea.
Developing an optimizing compiler for a newly proposed architecture is extremely difficult when only a simulator of the machine is available. Designing such a compiler requires running many experiments to understand how different optimizations interact. Because simulators are orders of magnitude slower than real processors, such experiments are severely restricted. This paper develops a technique to automatically build a performance model that predicts the impact of program transformations on any architecture, based on a limited number of automatically selected runs. As a result, the time needed to evaluate the impact of any compiler optimization in the early design stages can be drastically reduced, so that all selected candidate compiler optimizations can be evaluated. This is achieved by first evaluating a small set of sample compiler optimizations on a prior set of benchmarks to train a model, followed by a very small number of evaluations, or probes, of the target program.
We show that by training on less than 0.7% of all possible transformations (640 samples collected from 10 benchmarks, out of 880,000 possible samples, i.e., 88,000 per training benchmark) and probing the new program on only 4 transformations, we can predict the performance of all program transformations with an average error of just 7.3%. As each prediction takes almost no time to generate, this scheme provides an accurate method of evaluating compiler performance that is several orders of magnitude faster than current approaches.
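The abstract does not specify the model itself, but the train-then-probe idea can be sketched as follows: characterize each training benchmark by its measured speedups on the same few probe transformations, then predict the new program's behaviour across the whole transformation space from the training benchmark whose probe reactions are most similar. This is a minimal nearest-neighbour sketch of that idea; the dataset sizes, the probe indices, and the nearest-neighbour-with-rescaling predictor are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 training benchmarks, each with speedups measured
# (in simulation) for every candidate transformation. N_TRANSFORMS is a
# small stand-in for the 88,000-point space per benchmark.
N_TRANSFORMS = 200
PROBES = [3, 17, 42, 99]    # indices of the 4 probe transformations (assumed)

# Synthetic "ground truth": each row is one benchmark's speedup curve.
train_speedups = rng.uniform(0.5, 2.0, size=(10, N_TRANSFORMS))

def predict_all(probe_speedups, train_speedups, probes):
    """Predict the new program's speedup on every transformation from
    only its probe measurements (nearest-neighbour sketch)."""
    # Distance between the new program's probe reactions and each
    # training benchmark's reactions on the same probes.
    dists = np.linalg.norm(train_speedups[:, probes] - probe_speedups, axis=1)
    nearest = np.argmin(dists)
    # Rescale the neighbour's full curve to match the observed probes.
    scale = probe_speedups.mean() / train_speedups[nearest, probes].mean()
    return scale * train_speedups[nearest]

# Usage: a "new" program whose true behaviour resembles benchmark 0.
true_new = 1.1 * train_speedups[0]
pred = predict_all(true_new[PROBES], train_speedups, PROBES)
mean_err = np.abs(pred - true_new).mean() / true_new.mean()
print(f"mean relative error: {mean_err:.1%}")
```

Once trained, a prediction is a single distance computation and rescaling, which is why each prediction "takes almost no time to generate" compared with a simulator run.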