University of Wisconsin professor Mark Hill is working to make computers more efficient by finding hidden inefficiencies in their architecture, an increasingly necessary focus for computer engineers as Moore's law approaches its practical limits. One area of concentration is the performance of computer tasks: Hill times such tasks to determine both overall speed and the duration of each individual step.

In one project, Hill applied paging selectively, using a simpler address-translation method for certain components of important applications. This cut the relevant misses to less than 1 percent, a change that would let users do more with the same hardware, shrinking their server requirements and saving money. "A small change to the operating system and hardware can bring big benefits," Hill observes.

Hill advocates a more unified approach, and he believes the slowdown in Moore's law can be countered by wringing out the many hidden inefficiencies that remain. "I think we're going to wring out a lot of inefficiencies and still get gains," Hill says. "They're not going to be like the large ones that you've seen before, but I hope that they're sufficient that we can still enable new creations, which is really what this is about."
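The article does not spell out the mechanism, but the idea of bypassing full paged translation for part of an application's address space can be sketched as a toy model. The sketch below is purely illustrative, not Hill's implementation: it assumes a hypothetical "direct segment" style of translation, where one contiguous virtual range is mapped with a base/limit check and a single addition, while all other addresses fall back to a conventional page-table walk. All names, register fields, and numbers are invented for the example.

```python
PAGE_SIZE = 4096

class ToyMMU:
    """Toy model: a directly mapped segment (base, limit, offset) checked
    before a conventional page-table walk. Illustrative only."""

    def __init__(self, seg_base, seg_limit, seg_offset, page_table):
        self.seg_base = seg_base      # start of the directly mapped virtual range
        self.seg_limit = seg_limit    # end (exclusive) of that range
        self.seg_offset = seg_offset  # virtual-to-physical displacement for it
        self.page_table = page_table  # virtual page number -> physical frame
        self.walks = 0                # how many page-table walks we performed

    def translate(self, va):
        # Fast path: one comparison and one add, no table walk at all.
        if self.seg_base <= va < self.seg_limit:
            return va + self.seg_offset
        # Slow path: conventional paged translation via the page table.
        self.walks += 1
        vpn, off = divmod(va, PAGE_SIZE)
        return self.page_table[vpn] * PAGE_SIZE + off

mmu = ToyMMU(seg_base=0x10000, seg_limit=0x50000, seg_offset=0x700000,
             page_table={0: 5, 1: 9})

# Inside the direct segment: translated without touching the page table.
pa1 = mmu.translate(0x10010)   # -> 0x710010, mmu.walks still 0
# Outside it: falls back to the paged slow path.
pa2 = mmu.translate(0x1004)    # -> frame 9 * 4096 + 4 = 0x9004, mmu.walks now 1
```

The point of the sketch is the asymmetry of the two paths: the big, hot region of an important application pays almost nothing for translation, while ordinary paging is kept for everything else, which is consistent with the article's "small change, big benefit" framing.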