Rules & Criteria

Competition Rules

The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithms in the provided framework. The framework itself must not be modified, except for the contestant's prefetcher file, which implements their code prefetching algorithm. Each contestant's prefetcher C++ file will be compiled and run with the original version of the framework. Contestants will be ranked on the measured performance of their prefetching algorithms relative to a baseline with code prefetching disabled. Both the baseline and the contestants' prefetchers will be evaluated with the next_line L1 data prefetcher and the spp_dev L2 data prefetcher, both included with ChampSim. Each contestant will receive a single score: the geometric mean of their prefetcher's speedups across a set of benchmarks.
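For concreteness, if the speedup on each benchmark is taken as the ratio of the contestant's IPC to the baseline IPC (an assumed definition; the organizers determine the exact metric), the score over N benchmarks is

    score = ( speedup_1 × speedup_2 × ... × speedup_N )^(1/N),
    where speedup_i = IPC_i(contestant) / IPC_i(baseline)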

In addition to the information given to you through the prefetcher functions' interface, you may access the following information from the simulator:

current_core_cycle[cpu] — Current simulation cycle

MSHR.occupancy and MSHR.SIZE — Per-cache level MSHR resource availability

PQ.occupancy and PQ.SIZE — Per-cache level prefetch queue resource availability

get_set(cl_address) and get_way(cl_address, set) — Set and way information for a cache line. Two warnings apply. First, these functions must not be used to gain oracular knowledge of the contents of the cache: you should not use them to filter out prefetch candidates, and you should not scan the contents of the cache looking for prefetchable patterns. Use them only on the addr argument of the prefetcher_operate() functions; reviewers will be on the lookout for any abuses. Second, these functions expect a cache line address, while the addr argument of the prefetcher_operate() functions is a byte address, so call them with addr >> LOG2_BLOCK_SIZE.
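To illustrate the intended use, here is a minimal sketch of a prefetcher_operate() hook that checks the resource fields above and applies the set/way helpers only to its addr argument. The function signature shown is illustrative, not the framework's exact interface:

    void CACHE::prefetcher_operate(uint64_t addr, uint64_t ip, uint8_t cache_hit, uint8_t type)
    {
        // Back off when per-level resources are scarce (fields listed above).
        if (MSHR.occupancy >= MSHR.SIZE / 2 || PQ.occupancy >= PQ.SIZE / 2)
            return;

        // addr is a byte address; get_set()/get_way() expect a cache line address.
        uint64_t cl_address = addr >> LOG2_BLOCK_SIZE;
        uint32_t set = get_set(cl_address);
        uint32_t way = get_way(cl_address, set);

        // Permitted use: bookkeeping tied to this access only.
        // Do not scan other sets/ways looking for prefetchable patterns.
    }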

Your prefetcher should be driven only by the information passed to it as arguments to the prefetching functions, plus the exceptions listed above. Because some of the evaluation traces are public, your prefetcher must not attempt to identify which trace is running and configure itself specifically for that trace; doing so may cause your submission to be disqualified. Instead, your prefetcher should be versatile enough to handle a variety of workloads.

Also, because of the way that ChampSim allocates physical pages to virtual addresses, you may not use C's built-in random number generation functions; calling them is prohibited in IPC1, and if your code does use them, you will be asked to rewrite the RNG portion of your code. If you need random numbers, consider an alternative such as an Xorshift scheme, sketched below.
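For example, a minimal xorshift32 generator (one of many Xorshift variants; the seed value here is arbitrary but must be nonzero):

    #include <cstdint>

    // Minimal xorshift32 PRNG (Marsaglia). Not cryptographic, but adequate
    // for randomized sampling or replacement decisions inside a prefetcher.
    static uint32_t rng_state = 0x2545F491u; // arbitrary nonzero seed

    static inline uint32_t xorshift32()
    {
        uint32_t x = rng_state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return rng_state = x;
    }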

Your code prefetcher has a storage budget of 128 KB. There is no logical complexity budget, but we encourage you to discuss in your submitted paper how your prefetcher could be implemented in a real design.

We will evaluate your prefetchers using the hashed perceptron branch predictor and the LRU LLC replacement policy included with ChampSim, building the simulator with this command:

./build_champsim.sh hashed_perceptron <your_l1i_prefetcher_here> next_line spp_dev no lru 1

Acceptance Criteria

In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process in which the performance ranking is a key, but not the sole, component. To be considered, submissions must conform to the submission requirements described above.

Submissions will be selected to appear in the workshop on the basis of performance ranking, novelty, and the overall quality of the paper and commented code. Novelty is not a strict requirement: for example, a contestant may submit their previously published prefetcher or make incremental enhancements to previously proposed prefetchers. In such cases, performance is a heavily weighted criterion, as is the overall quality of the paper (for example, analysis of the new results on the common framework). Conversely, a very novel submission that is not necessarily a top performer will be judged not just on performance, but also on the insights it provides.