
Thursday, 2 September 2010

Multicore, Power, and Performance

Dear Junior

Next time you upgrade your data centre, you will of course get better performance out of your deployed systems. However, as there are several kinds of performance, it is interesting to think about what will happen to response time and capacity. Chances are capacity will improve, but response time might decline. The reason is spelled multicore.


In days of yore, long gone, processors improved by running at higher clock frequencies. Higher frequency made them able to process more stuff per unit of time. A new processor meant higher frequency and thus better response time. If high capacity was needed, it was achieved by the processor thread-switching between several requests, serving them all within reasonable response time.


Those days are gone. Nowadays high-speed processors are no longer in demand. The reason is power consumption. The power consumption of a processor is roughly proportional to the square of the frequency. So for thrice the speed, you pay nine times the power.


Slower Saves Power


There are of course several problems with high power consumption, such as electricity bills and environmental issues. However, the driving force is cooling. Every single joule that is fed into a processor in a data hall is transformed to heat. That heat needs to be transported out of the hall by the cooling system - and that is the limiting factor.


In other words - if you populate your data centre with high-speed processors you will hit the cooling capacity at some point. However, if you use low-speed processors (with a third of the frequency of the high-speed ones), each draws only one ninth of the power, so you can stuff nine times as many into the same data centre. Nine times more processors, each giving a third of the MIPS of a high-speed processor - still you get three times more MIPS out of the same hall. Good economy.
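To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The numbers are illustrative units only, not data for any real processor, and they assume the rough "power grows with the square of the frequency" rule above:

```python
# Back-of-the-envelope sketch: power roughly proportional to frequency squared.
# All numbers are illustrative units, not real processor data.

def power(freq):
    """Power consumption, assumed proportional to frequency squared."""
    return freq ** 2

fast = 3.0   # a high-speed processor
slow = 1.0   # a low-speed processor at a third of the frequency

# Same cooling budget: how many slow processors fit per fast one?
fit = power(fast) / power(slow)   # 9.0

# MIPS scales roughly linearly with frequency, so total throughput:
mips_fast = fast                  # 3.0 from one fast processor
mips_slow = fit * slow            # 9.0 from nine slow ones

print(fit)                        # 9.0
print(mips_slow / mips_fast)      # 3.0 - three times the MIPS per hall
```

Nine processors at a third of the speed fit in the cooling budget of one fast processor, and together they deliver three times the instructions per second.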


Of course, the modern way of doing it is not to build several processors, but to stuff multiple cores onto the same chip - i.e. multicore. Still, the driver is the same - you get more MIPS out of your data centre with lots of low-frequency cores than with substantially fewer high-frequency cores.


This is not just theory. Several of the large web sites of the world use metrics like MIPS/W or MIPS/m³ when benchmarking their data centres.


Actually, the same goes for laptops, where heat must be limited to prevent the user from getting burned. There, two slower cores on the same chip can give higher computational power and still radiate a lot less heat.


Better Capacity and Worse Response Time


Getting back to response time and capacity - how are these affected? Well, the data centre upgrade might consist of pulling out old single-core processors and replacing them with quad-core processors at 20% lower frequency. Power consumption goes down, making it possible to stuff more processors into the same space.


Looking at the data centre from the perspective of all the clients we serve, or all the transactions we process, we get a dramatic increase in capacity. However, from the perspective of a single request or transaction, we experience a slower processor - so response time, or latency, will get worse.
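The same illustrative arithmetic shows the trade-off. Assuming throughput scales with cores times frequency, while a single request runs on one core at a time:

```python
# Hypothetical upgrade: single-core replaced by a quad-core at 20% lower clock.
# Illustrative model: throughput ~ cores * frequency; a request uses one core.
old_cores, old_freq = 1, 1.0
new_cores, new_freq = 4, 0.8

# Capacity (throughput) scales with cores * frequency.
capacity_gain = (new_cores * new_freq) / (old_cores * old_freq)   # 3.2x

# A single request sees only one (slower) core, so it takes longer.
latency_change = old_freq / new_freq                              # 1.25x slower

print(capacity_gain)    # 3.2
print(latency_change)   # 1.25
```

Under these assumptions the hall handles over three times the load, while each individual request takes a quarter longer - better capacity, worse response time.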


Where processors and data centres were earlier sports cars getting faster and faster, they now rather resemble school buses that do not run nearly as fast, but can carry a lot of people at the same time.


Of course this will affect the non-functional attributes of the systems running in these data centres. Unfortunately, the effect is not that straightforward, but I am pretty sure we are slowly running into trouble and need to do something about it.


Yours


   Dan


ps This is one of the reasons it is important to keep response time and capacity apart, instead of stuffing them both under "performance".