CPU Simulator Functions: Adding a Fully Associative Cache


Our CPU simulator functions, but most real CPUs have at least one level of cache memory in the memory hierarchy. Let’s add a cache memory to our simulator so that we can run some experiments on settings related to cache memory to see how performance might be affected.

Add a fully associative cache to the simulator implemented in assignment 1. You must use the sample solution from assignment 1 as the basis for doing this assignment.

-There is only a data cache; all code fetches must go straight to the code area.

-The cache is empty at the beginning of a program’s execution. This means you have no valid cache directory entries. The actual contents of your cache memory can be filled with 0xFF, the same as main memory and code memory.

-The cache uses an LRU replacement policy.

-The cache uses a write back update policy. Make sure all data is written back to main memory at the end of execution (e.g., after the program you’re running crashes).

-The cache uses a demand fetch policy.

-Transfer all words of a block from memory to the cache (and vice versa) before continuing processing (i.e., the cache read/write is completed within a single phase of execution).
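Because the cache is fully associative, an address carries no index field: it splits into just a tag (which memory block) and a word offset within that block. A minimal sketch of that split, with `BLOCK_WORDS` standing in for your configured block size:

```c
#define BLOCK_WORDS 4  /* illustrative; use your configured block size */

/* Fully associative: no index field, so an address decomposes into
   a tag (which memory block) and an offset (which word inside it). */
static unsigned addr_tag(unsigned addr)    { return addr / BLOCK_WORDS; }
static unsigned addr_offset(unsigned addr) { return addr % BLOCK_WORDS; }
```

With a power-of-two block size these divisions compile down to shifts and masks, which mirrors what the hardware actually does.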

Upon completion, in addition to existing output, the simulator will print a report indicating the cache hits and misses and the hit rate achieved (prior to printing the memory contents).
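One straightforward way to produce that report is to keep two counters that the cache lookup bumps on every access, then print them before the existing memory dump. The names below are hypothetical, not part of the assignment-1 solution:

```c
#include <stdio.h>

/* Hypothetical counters -- bump cache_hits on a directory match and
   cache_misses whenever a block must be fetched from data memory. */
static unsigned long cache_hits, cache_misses;

static double hit_rate(void)
{
    unsigned long total = cache_hits + cache_misses;
    return total ? (double)cache_hits / (double)total : 0.0;
}

/* Print the cache report before the existing memory-contents dump. */
static void print_cache_report(void)
{
    printf("Cache hits:   %lu\n", cache_hits);
    printf("Cache misses: %lu\n", cache_misses);
    printf("Hit rate:     %.2f%%\n", 100.0 * hit_rate());
}
```

Guarding against a zero total keeps the report sane for programs that never touch data memory.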

-Remember that data memory and cache memory should be structured in the same way. Remember further that each entry in your cache memory is a block, not an individual word or byte. That means that your data memory should also be organized as a set of blocks rather than individual words or bytes.

-Our diagrams of cache memory and the cache directory show the cache directory immediately beside or mixed with cache memory. While that layout matches the diagrams we’ve been looking at, implementing cache memory that way would be painful. You are strongly encouraged to implement your cache directory and the corresponding cache memory as parallel arrays.

-For each entry in your cache, your cache directory should store:

*A valid bit (whether or not this is a real cache entry that you have explicitly loaded from main memory).

*A dirty bit (whether or not this cache entry has been written to, but hasn’t been written back to main memory).

*A tag (the tag that you extract from the address).

*A value that you use for tracking which entry in the directory is the least recently used.
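The directory entry above, the parallel arrays, and the LRU/write-back/demand-fetch policies can be sketched together as follows. This is a sketch under stated assumptions, not the assignment solution: the sizes, the `data_mem` stand-in, and all names are illustrative, and you would wire the equivalent logic into your own assignment-1 code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative geometry; the real values should come from your
   preprocessor configuration. */
#define CACHE_BLOCKS 8
#define BLOCK_WORDS  4
#define MEM_BLOCKS   64

/* Stand-in for the simulator's data memory, organized as blocks. */
static uint8_t data_mem[MEM_BLOCKS][BLOCK_WORDS];

/* One directory entry per cache block -- parallel to cache_mem below. */
typedef struct {
    bool     valid;  /* entry holds a real block */
    bool     dirty;  /* written to, but not yet written back */
    uint32_t tag;    /* which memory block this is */
    uint32_t lru;    /* larger = more recently used */
} dir_entry;

static dir_entry cache_dir[CACHE_BLOCKS];
static uint8_t   cache_mem[CACHE_BLOCKS][BLOCK_WORDS];
static uint32_t  use_clock;  /* bumped on every access */

/* Return the slot holding the block for `addr`, fetching it on demand. */
static int cache_find(uint32_t addr)
{
    uint32_t tag = addr / BLOCK_WORDS;
    int victim = -1;

    /* Fully associative: any block can sit in any slot, so search all. */
    for (int i = 0; i < CACHE_BLOCKS; i++) {
        if (cache_dir[i].valid && cache_dir[i].tag == tag) {
            cache_dir[i].lru = ++use_clock;  /* hit: mark most recent */
            return i;
        }
    }

    /* Miss: choose a victim -- an invalid slot if any, else the LRU one. */
    for (int i = 0; i < CACHE_BLOCKS; i++) {
        if (!cache_dir[i].valid) { victim = i; break; }
        if (victim < 0 || cache_dir[i].lru < cache_dir[victim].lru)
            victim = i;
    }

    /* Write back a dirty victim before reusing its slot. */
    if (cache_dir[victim].valid && cache_dir[victim].dirty)
        memcpy(data_mem[cache_dir[victim].tag], cache_mem[victim],
               BLOCK_WORDS);

    /* Demand fetch: transfer the whole requested block in one step. */
    memcpy(cache_mem[victim], data_mem[tag], BLOCK_WORDS);
    cache_dir[victim] = (dir_entry){ true, false, tag, ++use_clock };
    return victim;
}
```

The end-of-execution flush the write-back bullet asks for is the same `memcpy` applied to every slot that is both valid and dirty. A store would call `cache_find`, write the word at its offset within the returned slot, and set that slot's dirty bit.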

-You are simulating hardware. Hardware does not have a dynamic memory allocator (memory size is fixed when the hardware is designed; it’s especially fixed when it’s physically part of a CPU die). Use of malloc is explicitly forbidden for this assignment (i.e., you should not be using lists or queues).

-Looking forward to part 2: you should use preprocessor variables to set the number of blocks and the block size of your cache memory.
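One way to set that up (names illustrative; match them to your own code) is to guard the definitions with `#ifndef`, so part 2's experiments become a matter of overriding them from the compiler command line:

```c
/* Cache geometry -- the #ifndef guards let the compiler command line
   override these defaults without editing the source. */
#ifndef CACHE_BLOCKS
#define CACHE_BLOCKS 8   /* number of blocks in the cache */
#endif

#ifndef BLOCK_SIZE
#define BLOCK_SIZE 4     /* words per block */
#endif

/* Derived size, handy when declaring the parallel arrays. */
#define CACHE_WORDS (CACHE_BLOCKS * BLOCK_SIZE)
```

For example, `cc -DCACHE_BLOCKS=16 -DBLOCK_SIZE=8 sim.c` rebuilds the simulator with a larger cache, with no source changes.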
