MSI/MESI: How can we get "read miss" in shared state?
For the first question: a data block is present in the cache (in the shared state), but it is the wrong block (i.e., the tag does not match), so the access is a cache miss. On a miss, the cache still needs to write back the evicted block to memory if that block is in the modified state.
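A minimal sketch of that lookup rule (the class and function names here are hypothetical, not from any real cache model): a slot can be valid in the Shared state yet still miss because the stored tag differs, and eviction only needs a writeback when the resident block is dirty.

```python
# Hypothetical sketch of a single direct-mapped cache slot with MSI states.
class CacheLine:
    def __init__(self, tag, state):
        self.tag = tag        # which memory block occupies this slot
        self.state = state    # 'M', 'S', or 'I'

def lookup(line, tag):
    """Return (hit, writeback_needed) for an access with the given tag."""
    if line.state != 'I' and line.tag == tag:
        return True, False    # hit: valid state and matching tag
    # Miss: the slot holds some *other* block; evicting it requires a
    # writeback only if that resident block is dirty (Modified).
    return False, line.state == 'M'

line = CacheLine(tag=0x12, state='S')
print(lookup(line, 0x12))   # (True, False)  -- hit in Shared
print(lookup(line, 0x34))   # (False, False) -- "miss in Shared": wrong tag
```

The second call is exactly the "read miss in shared state" from the question: the state machine says Shared, but the tag comparison fails.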
For the second question: each bus provides address and request information (read or write) to the cache; the system bus provides this from a remote processor. Hit or miss is determined by whether the address provided hits in the cache. Requests on the system bus are filtered by the remote processor's cache.
On a read by processor 1 (P1), if P1's cache has a hit, no signal needs to be sent to the system bus. On a write by P1, if P1's cache has a hit and the state is exclusive or modified, then no signals need to be sent to the system bus either. But if the state in P1's cache is shared, then the other (P0) cache must be told (via the system bus) that P1 is performing a write, and any entry in P0's cache for that address must be invalidated.
(Note that "shared" state does not necessarily mean that the block of memory is present in some other cache. Most snoop-based protocols allow silent [no system bus communication] dropping of cache blocks in shared state.)
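The write cases above can be sketched as a tiny two-cache MSI model (a toy, with hypothetical names; real controllers are far more involved): a write while Modified is silent, while a write in Shared or on a miss uses the bus and invalidates remote copies.

```python
# Hypothetical two-cache MSI sketch: a write in Shared (or on a miss) must
# broadcast an invalidation on the system bus; a write in Modified is silent.
class MSICache:
    def __init__(self):
        self.state = {}                  # address -> 'M' or 'S' (absent = Invalid)

    def write(self, addr, others):
        st = self.state.get(addr, 'I')
        bus_used = st != 'M'             # Modified: silent; Shared/Invalid: bus
        if bus_used:
            for c in others:             # snooping caches drop their copies
                c.state.pop(addr, None)
        self.state[addr] = 'M'
        return bus_used

p0, p1 = MSICache(), MSICache()
p0.state['A'] = 'S'
p1.state['A'] = 'S'
print(p0.write('A', [p1]))      # True: invalidation sent on the bus
print(p1.state.get('A', 'I'))   # 'I': P1's shared copy was invalidated
print(p0.write('A', [p1]))      # False: now Modified, no bus traffic
```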
(By the way, the MESI protocol given by that book is a bit unusual. It is more common for Exclusive state to be entered by a read miss that found no other caches with the block [this allows silent update to modified on a later write] and for writes not to use write through to memory on a hit that was in shared state.)
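The more common MESI convention described in that aside can be sketched as follows (again a hypothetical toy model): a read miss that finds no sharers enters Exclusive, and a later write upgrades E to M silently, with no bus traffic.

```python
# Hypothetical MESI sketch: read miss with no sharers -> Exclusive;
# write while Exclusive -> silent upgrade to Modified.
class MESICache:
    def __init__(self):
        self.state = {}                      # addr -> 'M','E','S' (absent = Invalid)

    def read(self, addr, others):
        if addr in self.state:
            return 'hit'
        shared = any(addr in c.state for c in others)
        for c in others:                     # existing holders downgrade to Shared
            if addr in c.state:
                c.state[addr] = 'S'
        self.state[addr] = 'S' if shared else 'E'
        return 'miss'

    def write(self, addr, others):
        st = self.state.get(addr, 'I')
        if st in ('M', 'E'):                 # silent upgrade: no bus traffic
            self.state[addr] = 'M'
            return False
        for c in others:                     # otherwise invalidate remote copies
            c.state.pop(addr, None)
        self.state[addr] = 'M'
        return True

p0, p1 = MESICache(), MESICache()
p0.read('A', [p1])                  # no other cache has the block
print(p0.state['A'])                # 'E'
print(p0.write('A', [p1]))          # False: silent E -> M upgrade
```

The payoff of Exclusive is visible in the last line: the first write to a privately held block needs no invalidation broadcast.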
For the third question: cache coherence protocols only address coherence, not consistency. The consistency model that the hardware provides determines what kinds of activity can be buffered and completed out of order (in a way that is visible to software). E.g., it helps performance if a write miss does not force any following read hits to wait until the write notification has been seen by all remote caches, but doing so can violate sequential consistency.
Here is an example of how sequential consistency can prevent buffering of writes:
P0 reads "B_old" at location B (cache hit), writes "A_new" to location A (a cache miss), then reads location B.
P1 reads "A_old" at location A (cache hit), writes "B_new" to location B (a cache miss), then reads location A.
If writes are buffered and reads are allowed to proceed without waiting for the preceding write to be recognized by the other cache, then P0's second read of B can read "B_old" (since it will still be a cache hit) and P1's second read of A can read "A_old" (since P0's write has not yet been seen). There is no way to construct a sequential ordering of these memory accesses, so sequential consistency would be violated.
However, if P0's second read of B waited until P0's write to A was recognized by P1 (and P1's second read of A waited until P1's write to B was recognized by P0), then a sequential ordering would be possible.
For example, if P1 saw P0's write to A before performing its own write to B, then sequential orderings are possible, including:
- P0.read "B_old", P1.read "A_old", P0.write "A_new", P1.write "B_new", P0.read "B_new", P1.read "A_new"
- P0.read "B_old", P1.read "A_old", P0.write "A_new", P0.read "B_old", P1.write "B_new", P1.read "A_new"
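The claim that no sequential ordering produces the buffered result can be checked mechanically. The sketch below (my own brute-force check, not from the original discussion) enumerates every interleaving of the two three-operation programs against a single shared memory, preserving each processor's program order, and records what the two final reads return.

```python
from itertools import combinations

# Each op is (kind, address, label): labels tag the reads we care about.
P0 = [('r', 'B', 'p0_first'), ('w', 'A', 'A_new'), ('r', 'B', 'p0_second')]
P1 = [('r', 'A', 'p1_first'), ('w', 'B', 'B_new'), ('r', 'A', 'p1_second')]

outcomes = set()
for p0_slots in combinations(range(6), 3):   # positions P0's ops occupy
    schedule, i0, i1 = [], 0, 0
    for pos in range(6):                     # interleave, keeping program order
        if pos in p0_slots:
            schedule.append(P0[i0]); i0 += 1
        else:
            schedule.append(P1[i1]); i1 += 1
    mem = {'A': 'A_old', 'B': 'B_old'}       # single memory = sequential order
    seen = {}
    for kind, addr, label in schedule:
        if kind == 'w':
            mem[addr] = label
        else:
            seen[label] = mem[addr]
    outcomes.add((seen['p0_second'], seen['p1_second']))

# The buffered-write result -- both second reads returning the old values --
# appears in no interleaving, so it violates sequential consistency.
print(('B_old', 'A_old') in outcomes)   # False
print(('B_new', 'A_new') in outcomes)   # True (first ordering listed above)
print(('B_old', 'A_new') in outcomes)   # True (second ordering listed above)
```

The impossibility also follows directly from program order: "B_old" for P0's second read forces P1's write to B after it, and "A_old" for P1's second read forces P0's write to A after that, which puts P0's write after P0's own later read, a contradiction.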
Note that hardware can speculate that ordering requirements are not violated and reorder the completion of memory accesses; however, it needs to be able to handle incorrect speculation (e.g., by restoration to a checkpoint). (A classic early paper on this is "Is SC + ILP = RC?" [PDF].)