Case Study · Undergraduate · 878 words

Managing Contention for Shared Resources on Multicore Processors


"Contention for shared resources significantly impedes the efficient operation of multicore processors" (Fedorova, 2009). The authors of "Managing Contention for Shared Resources on Multicore Processors" (Fedorova, 2009) found that shared cache contention, prefetching hardware, and memory interconnects were all responsible for performance degradation. After implementing a "pain" classification model, based on cache sensitivity and intensity, to test applications, the authors discovered that high-miss-rate applications must be kept apart and not co-scheduled on the same memory domain.
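The sensitivity/intensity classification can be sketched as a pairwise score; the multiplicative form below is an illustrative assumption, not the authors' exact formula:

```python
# Illustrative sketch: score how much application A suffers when
# co-scheduled with application B on the same memory domain.
# sensitivity = how much A depends on the shared cache;
# intensity   = how aggressively B uses it (e.g., its LLC miss rate).
def pain(sensitivity_a: float, intensity_b: float) -> float:
    """Higher score = worse expected degradation for A next to B."""
    return sensitivity_a * intensity_b

# A sensitive application suffers more next to an intense neighbor,
# which is why high-miss-rate applications should be kept apart.
assert pain(0.9, 0.8) > pain(0.9, 0.1)
```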

Therefore, managing how applications are assigned by the scheduler mitigates the performance degradation that contention inflicts on the cache lines and on the applications sharing the processors. The authors built a prototype scheduler, called Distributed Intensity Online (DIO), that distributes memory-intensive applications (those with high last-level cache (LLC) miss rates) across memory domains after measuring the applications' miss rates online. Across eight different test workloads, DIO improved workload performance by 11% (Fedorova, 2009), with some applications showing 60-80% improvement over the worst-case schedules.
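The core distribution step can be sketched as follows; the round-robin placement and the helper name are illustrative assumptions (DIO itself measures miss rates online via hardware performance counters):

```python
# Sketch of distributed-intensity scheduling: rank applications by
# observed LLC miss rate, then deal them out across memory domains so
# the most memory-intensive applications land on different domains.
def distribute_by_intensity(miss_rates, n_domains):
    ranked = sorted(miss_rates, key=miss_rates.get, reverse=True)
    domains = [[] for _ in range(n_domains)]
    for i, app in enumerate(ranked):
        domains[i % n_domains].append(app)  # round-robin over domains
    return domains

apps = {"mcf": 9.1, "milc": 7.4, "gcc": 1.2, "namd": 0.3}
print(distribute_by_intensity(apps, 2))
# The two highest-miss-rate applications ("mcf", "milc") end up on
# different domains, each paired with a low-miss-rate application.
```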

Applications that relied heavily on prefetching hardware showed the most improvement under DIO. DIO also shows potential for ensuring quality of service (QoS) for critical applications, since it provides a means of ensuring the worst scheduling assignments are never used. Another scheduler that was evaluated, Power DI (Distributed Intensity), was used to test power consumption. Power DI clusters incoming applications onto as few machines as possible, except for machines running memory-intensive applications.

The concept is the same on a single machine, only there it clusters applications onto as few memory domains as possible. The effectiveness differed with the number of memory-intensive applications: the more of them there were, the more domains, or machines, ended up being used, and therefore the more power was consumed. Power DI was able to adapt to the properties of the workload to minimize the power used.
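The clustering rule described above can be sketched as a simple packing loop; the capacity, threshold, and greedy placement are illustrative assumptions, not the paper's actual algorithm:

```python
# Sketch of the Power DI idea: pack applications onto as few memory
# domains as possible, but give each memory-intensive application
# (miss rate above a chosen threshold) a domain of its own.
def power_di(miss_rates, domain_capacity=2, intensive_threshold=5.0):
    domains = []
    for app, rate in sorted(miss_rates.items(), key=lambda kv: -kv[1]):
        placed = False
        if rate < intensive_threshold:
            for d in domains:
                # Only join a domain with spare capacity and no
                # memory-intensive tenant already in it.
                if len(d) < domain_capacity and all(
                        miss_rates[a] < intensive_threshold for a in d):
                    d.append(app)
                    placed = True
                    break
        if not placed:
            domains.append([app])  # open a new domain
    return domains

apps = {"mcf": 9.1, "gcc": 1.2, "namd": 0.3, "povray": 0.1}
print(len(power_di(apps)))  # fewer active domains => less power
```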

The authors found that the dominant cause of performance degradation is contention for shared resources: the front-side bus, prefetching resources, and the memory controller. Applications that issue many cache misses occupy the memory controller and the front-side bus, which hurts both the other applications using that hardware and the offending applications themselves. Cache contention stems from two or more threads running on the same memory domain. The cache consists of lines allocated to hold thread memory as the threads issue cache requests.

Threads share the last-level cache (LLC), so when a thread requests a line that is not in the cache (a cache miss), a new line is allocated. When the cache is full, data must be evicted to free space for the new data, and this eviction process hurts performance. According to Zhao (2011), the amount of data sharing is primarily a function of the cache line size and application behavior. Invalidations and misses occur when cores compete to access the same data, or different items in the same cache line.
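The allocation-and-eviction behavior described above can be shown with a toy model of a single cache set; the LRU replacement policy used here is an illustrative assumption, not necessarily what the hardware in the paper implements:

```python
from collections import OrderedDict

# Toy model of one cache set: a miss allocates a line; when the set
# is full, the least-recently-used (LRU) line is evicted first.
class CacheSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> line data (order = recency)

    def access(self, tag):
        if tag in self.lines:
            self.lines.move_to_end(tag)  # hit: refresh recency
            return "hit"
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # full: evict the LRU line
        self.lines[tag] = object()  # allocate a new line
        return "miss"

s = CacheSet(ways=2)
print([s.access(t) for t in ["A", "B", "A", "C", "B"]])
# → ['miss', 'miss', 'hit', 'miss', 'miss']
# Accessing "C" evicts "B", so the final access to "B" misses again:
# eviction converts what would have been hits into misses.
```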

When this happens frequently, it causes sharing-induced slowdown; applications with high miss rates often exhibit false sharing and, therefore, performance degradation. According to Arteaga (n.d.), contention arises from the very nature of resource sharing: how resources are distributed across cores and threads determines core and thread performance. Scheduling across multiple machines can cause cores and threads to compete for the same resources, and applications with similar behavior may compete for the same resources while leaving other resources relatively idle. As a result, workload performance degrades wherever the distributed resources are not fully utilized.
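The false-sharing condition mentioned above reduces to an address calculation: two data items fall in the same cache line whenever their addresses map to the same line index. The 64-byte line size below is an assumption (typical of x86 processors), used here only to illustrate the check:

```python
LINE_SIZE = 64  # bytes; a typical x86 line size, assumed for illustration

def same_cache_line(addr_a: int, addr_b: int) -> bool:
    """Two addresses share a cache line iff their line indices match."""
    return addr_a // LINE_SIZE == addr_b // LINE_SIZE

# Two counters 8 bytes apart share a line, so writes from different
# cores invalidate each other's copies (false sharing)...
assert same_cache_line(0x1000, 0x1008)
# ...while padding them a full line apart avoids the invalidations.
assert not same_cache_line(0x1000, 0x1040)
```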

To manage contention and optimize performance, accurate modeling of the impact of inter-process cache contention on performance and power consumption is required (Xu, 2010). This in turn requires a cache-contention-aware assignment algorithm. High-miss-rate applications must be kept apart and not co-scheduled on the same memory domain: a high miss rate indicates performance degradation, while a lower miss rate indicates better performance. Applications that aggressively use prefetching hardware will have high LLC miss rates; the miss rate is therefore a good indicator of heavy use of the prefetching hardware.

The authors recommend that applications making heavy use of prefetching hardware be kept to a minimum on any one domain and co-scheduled with lower-miss-rate applications. The cache, front-side bus, and memory controller are equally important in how the scheduling algorithms optimize performance. The LLC miss rate.

"Managing Contention For Shared Resources On Multicore Processors" (2013, November 30). Retrieved from
https://www.paperdue.com/essay/managing-contention-for-shared-resources-178473

