There is a performance-at-all-costs mentality at most of the nation's supercomputing centers that has resulted in significant and growing energy use. This unchecked consumption costs the government a considerable amount of money and wastes natural resources. Moreover, energy consumption and the resultant heat dissipation are becoming important performance-limiting factors that we believe will eventually come to bear on high-performance computing users. The goal of our research is to consume less energy (and generate less heat) with no more than a modest performance penalty, and to do so without burdening computational scientists. This talk investigates the energy consumption and execution time of applications from a standard benchmark suite (NAS) on a power-scalable cluster. Our results show that many standard scientific applications executed on such a cluster can save energy by scaling the processor down to lower power levels, without a significant increase in execution time. Additionally, this talk presents several runtime techniques for controlling power consumption and increasing the energy efficiency of applications without undue performance penalties. Furthermore, this talk shows how to both consume less energy and increase performance in a cluster by increasing the parallelism (more nodes) while simultaneously decreasing the individual node performance.
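
To make the trade-off concrete, the sketch below works through the kind of back-of-the-envelope arithmetic behind these claims. It is purely illustrative: the power, runtime, node-count, and parallel-efficiency numbers are assumptions chosen for the example, not measurements from the experiments described in this talk.

```python
# Back-of-the-envelope energy model for a power-scalable node.
# Every number below is an assumption made for illustration only.

def node_energy(power_watts: float, runtime_s: float) -> float:
    """Energy (joules) is average power times execution time."""
    return power_watts * runtime_s

# Assumed baseline: one node at full frequency.
full_power, full_time = 100.0, 1000.0      # watts, seconds (assumed)

# Assumed scaled-down setting: frequency/voltage scaling cuts power sharply,
# while a partially memory-bound code slows only slightly.
scaled_power, scaled_time = 60.0, 1050.0   # watts, seconds (assumed)

e_full = node_energy(full_power, full_time)
e_scaled = node_energy(scaled_power, scaled_time)
print(f"scaled down: {1 - e_scaled / e_full:.0%} less energy, "
      f"{scaled_time / full_time - 1:.0%} more time")

# Assumed scale-out setting: spread the work over more nodes, each running
# at the reduced setting.  With decent parallel efficiency, wall-clock time
# can drop below the full-speed baseline while total energy still falls.
nodes, efficiency = 2, 0.9                 # assumed
scaleout_time = scaled_time / (nodes * efficiency)
e_scaleout = nodes * node_energy(scaled_power, scaleout_time)
print(f"scaled out:  {1 - e_scaleout / e_full:.0%} less energy, "
      f"{full_time / scaleout_time:.2f}x faster")
```

The qualitative point the example carries is the one the talk makes quantitatively: because energy is the product of power and time, a large power reduction bought at a small time cost lowers total energy, and adding slower, lower-power nodes can recover the lost time.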