Reducing Instruction Cache Energy With An L0-Cache
The memory hierarchy of high-performance and embedded processors has been shown to be one of the major energy consumers, and extrapolating current trends, this share is likely to grow in the near future. In this paper, a technique is proposed that adds a small extra cache, called the L0-cache, between the I-cache and the CPU core. This mechanism can supply the instruction stream to the datapath and, when managed properly, can largely eliminate the need for frequent accesses to the more power-hungry I-cache.
Five techniques are proposed and evaluated that dynamically analyze the program's instruction access behavior and proactively manage the L0-cache. The basic idea is that only the most frequently executed portion of the code should be stored in the L0-cache, since this is where the program spends most of its time. A rough illustration of this idea is sketched below.
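As a rough illustration of the idea (not the paper's actual mechanism or any of its five specific techniques), the Python sketch below models an instruction fetch path in which a tiny L0-cache is probed before the larger I-cache, and a cache line is promoted into the L0-cache only after its execution count crosses a "hot" threshold. All names, sizes, thresholds, and energy costs (L0_LINES, HOT_THRESHOLD, E_L0, E_ICACHE) are assumptions chosen for illustration only.

```python
# Hypothetical sketch of an L0-cache fetch path: a tiny cache is probed
# before the I-cache, and only "hot" (frequently executed) lines are
# promoted into it. Sizes, thresholds, and energy costs are illustrative
# assumptions, not values from the paper.
from collections import OrderedDict, defaultdict

L0_LINES = 8            # assumed L0-cache capacity, in cache lines
HOT_THRESHOLD = 4       # assumed execution count before a line counts as "hot"
E_L0, E_ICACHE = 1, 10  # assumed relative energy per access

class FetchPath:
    def __init__(self):
        self.l0 = OrderedDict()             # line address -> True, kept in LRU order
        self.exec_count = defaultdict(int)  # how often each line has been fetched
        self.energy = 0

    def fetch(self, line_addr):
        self.exec_count[line_addr] += 1
        # Probe the L0-cache first; a hit avoids the expensive I-cache access.
        if line_addr in self.l0:
            self.l0.move_to_end(line_addr)
            self.energy += E_L0
            return "L0 hit"
        # L0 miss: charge an I-cache access (the I-cache is assumed to always hit here).
        self.energy += E_ICACHE
        # Promote the line into the L0-cache only once it is frequently executed.
        if self.exec_count[line_addr] >= HOT_THRESHOLD:
            if len(self.l0) >= L0_LINES:
                self.l0.popitem(last=False)  # evict the least recently used line
            self.l0[line_addr] = True
        return "I-cache access"

# Example: a small loop body (lines 0-3) executed many times dominates the
# fetch stream and ends up served almost entirely from the L0-cache.
fp = FetchPath()
for _ in range(100):
    for line in range(4):
        fp.fetch(line)
print("total energy (relative units):", fp.energy)
```

Under these assumed costs, the hot loop is served from the cheap L0-cache after its first few iterations, which is the intuition behind storing only the most frequently executed code there.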
Experimental results indicate that more than 60% of the energy dissipated in the I-cache subsystem can be saved.
INTRODUCTION
In recent years, power dissipation has become one of the major design concerns for the microprocessor industry. Shrinking device sizes and the large number of devices packed on a chip die, coupled with high operating frequencies, have led to unacceptably high levels of power dissipation. The power wasted by unnecessary activity in various parts of the CPU during code execution has traditionally been ignored in code optimization and architectural design.
Higher frequencies and larger transistor counts more than offset the lower supply voltages and smaller device sizes, resulting in greater power consumption in each new generation of a processor family.