There is a performance-at-all-costs mentality at most of the nation's supercomputing centers that has resulted in significant and growing energy use. This unchecked consumption costs the government a considerable amount of money and wastes natural resources. Moreover, energy consumption and the resultant heat dissipation are becoming important performance-limiting factors that we believe will eventually come to bear on high-performance computing users. The goal of our research is to consume less energy (and so generate less heat) with no more than a modest performance penalty, and to do so without burdening computational scientists. This talk first discusses the energy consumption and execution time of applications from a standard benchmark suite (NAS) on a power-scalable cluster. Our results show that many standard scientific applications executed on such a cluster can save energy by reducing the processor frequency and voltage without a significant increase in execution time. Additionally, this talk presents two techniques for dynamically controlling power consumption and increasing the energy efficiency of applications without undue performance penalties. One technique reduces CPU performance in phases that are not CPU-bound. The other reduces CPU performance on nodes that are not on the critical path. In both situations, application performance does not depend heavily on CPU performance, so the resulting energy savings come with little increase in execution time.
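The phase-based technique can be illustrated with a minimal sketch (ours, not the authors' implementation). It assumes a hypothetical table of available processor frequency/voltage states and a phase classifier supplied elsewhere, and simply selects a lower-power state whenever a phase is not CPU-bound:

```python
# Illustrative sketch only: per-phase CPU frequency selection.
# The frequency table and phase labels are hypothetical, not from the talk.

AVAILABLE_FREQS_MHZ = [600, 800, 1000, 1200, 1400]  # lowest to highest gear

def choose_frequency(phase_is_cpu_bound: bool) -> int:
    """Run CPU-bound phases at the highest frequency; otherwise drop to a
    lower frequency, since performance in those phases is limited by memory
    or communication rather than by the CPU."""
    if phase_is_cpu_bound:
        return AVAILABLE_FREQS_MHZ[-1]
    # Not CPU-bound: a lower gear saves energy with little slowdown.
    return AVAILABLE_FREQS_MHZ[0]

# Example: a trace of phases (True = CPU-bound compute, False = e.g. MPI wait)
trace = [True, False, True, False, False]
schedule = [choose_frequency(p) for p in trace]
print(schedule)  # [1400, 600, 1400, 600, 600]
```

On a real Linux system the chosen frequency would be applied through the kernel's cpufreq interface; the node-slack technique follows the same pattern, except the decision is made per node based on critical-path membership rather than per phase.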