Boosting Application Speeds
October 12, 2016
A technique called Dense Footprint Cache may help significantly speed up applications on computers and mobile devices. When a processor is running an application, it has to retrieve data that is stored “off processor” in device memory, and that trip takes time. To make it faster, processors can be paired with a cache of die-stacked dynamic random access memory, which lets the processor retrieve frequently used data much more quickly.
However, there is a decent chance that the data in question won’t be stored there, in which case the processor has to go to main memory anyway, which slows the whole process down. Under this new system, though, the cache learns which data the application actually uses and keeps track of what it holds, allowing the processor to access that data faster. In tests, the system was on average 9.5% faster than state-of-the-art competing designs, which is a noticeable improvement. Furthermore, it allows the processor to skip looking for data that it knows is not in the cache, reducing “last level cache miss ratios” by 43%. As a bonus, the whole process uses 4.3% less energy than normal, which means slightly longer battery life, too.
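The idea can be sketched in a few lines of code. This is a simplified toy model, not the actual hardware design: the class name, page/block sizes, and method names below are all invented for illustration. The two ingredients are a learned “footprint” (which blocks of a memory page the application actually touches) and a presence record that lets the processor skip the cache entirely when it already knows a block isn’t there.

```python
# Toy model of footprint-style caching (illustrative only; names and
# structure are assumptions, not the published hardware design).
class FootprintCache:
    def __init__(self, blocks_per_page=8):
        self.blocks_per_page = blocks_per_page
        self.footprints = {}  # learned footprints: page -> blocks actually touched
        self.cache = {}       # cached pages: page -> blocks currently present

    def learn(self, page, block):
        # Record that this block of the page was touched, so the next
        # time the page is cached we fetch only the blocks that matter.
        self.footprints.setdefault(page, set()).add(block)

    def install(self, page):
        # Bring a page into the cache, fetching only its learned footprint
        # rather than the whole page (saving bandwidth and cache space).
        footprint = self.footprints.get(page, set(range(self.blocks_per_page)))
        self.cache[page] = set(footprint)

    def access(self, addr):
        page, block = divmod(addr, self.blocks_per_page)
        self.learn(page, block)
        present = self.cache.get(page)
        if present is None or block not in present:
            # The presence record says up front that the block is not
            # cached, so we go straight to memory with no wasted lookup.
            return "memory"
        return "cache"
```

For example, once block 1 of page 0 has been learned and the page installed, an access to it hits the cache, while an access to an unlearned block of the same page is routed straight to memory without a fruitless cache search.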
While this technology (which is still new and not ready for the market) is unlikely to drastically change the way we compute, it will help, especially if it ends up in consumer devices like smartphones and tablets. Even a 9.5% speedup is something users can feel, and it lets applications respond more quickly. This is the kind of thing that, bundled into next-generation smartphones, for example, makes people actually want to upgrade their devices. It’s an improvement that actually means something, unlike, say, removing the 3.5mm audio jack so that customers are forced to buy more expensive, more complicated headphones.