Shared memory and IPC

The distinction here is between IPC mechanisms used for signalling and mechanisms used for shared state.

Signalling (signals, message queues, pipes, etc.) is appropriate for information that tends to be short, timely and directed. Events over these mechanisms tend to wake up or interrupt another program. The analogy would be, "what would one program SMS to another?"

  • Hey, I added a new entry to the hash table!
  • Hey, I finished that work you asked me to do!
  • Hey, here's a picture of my cat. Isn't he cute?
  • Hey, would you like to go out, tonight? There's this new place called the hard drive.
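The "SMS" style of IPC can be sketched with a pipe between two processes. This is an illustrative example, not tied to any particular program from the text; it assumes a Unix-like system where `multiprocessing` forks by default.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The child finishes its work, then "texts" the parent over the pipe.
    conn.send("Hey, I finished that work you asked me to do!")
    conn.close()

parent_conn, child_conn = Pipe()
p = Process(target=worker, args=(child_conn,))
p.start()
message = parent_conn.recv()   # blocks until the message arrives
p.join()
print(message)
```

The key property is that `recv()` wakes the parent up when the message arrives, which is exactly the short, timely, directed pattern described above.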

Shared memory, compared with the above, is more effective for sharing relatively large, stable objects that change in small parts or are read repeatedly. Programs might consult shared memory from time to time or after receiving some other signal. Consider: what would a family of programs write on a (large) whiteboard in their home's kitchen?

  • Our favorite recipes.
  • Things we know.
  • Our friends' phone numbers and other contact information.
  • The latest manuscript of our family's illustrious history, organized by prison time served.
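The whiteboard pattern can be sketched with Python's `multiprocessing.shared_memory` (Python 3.8+): one process creates a named segment and writes a note into it; a second process attaches to the segment by name and reads the note in place. The note contents and the pipe used to report the result back are illustrative.

```python
from multiprocessing import Process, Pipe, shared_memory

def reader(name, length, conn):
    # Attach to an existing segment by name and read it in place.
    shm = shared_memory.SharedMemory(name=name)
    conn.send(bytes(shm.buf[:length]))
    shm.close()

note = b"favorite recipe: toast"
shm = shared_memory.SharedMemory(create=True, size=len(note))
shm.buf[:len(note)] = note     # write the note on the whiteboard

parent_conn, child_conn = Pipe()
p = Process(target=reader, args=(shm.name, len(note), child_conn))
p.start()
seen = parent_conn.recv()
p.join()

shm.close()
shm.unlink()                   # free the segment when done
print(seen)
```

Unlike the pipe example, nothing here wakes anyone up: the data just sits in the segment until somebody chooses to look at it.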

With these examples, you might say that shared memory is closer to a file than to an IPC mechanism in the strictest sense, with the obvious exceptions that shared memory is

1. Directly addressable as ordinary memory, whereas file access goes through read/write (and seek) system calls.
  2. Volatile, whereas files tend to survive program crashes.

An example of where you want shared memory is a shared hash table (or B-tree or other compound structure). You could have every process receive update messages and maintain a private copy of the structure, or you could store the hash table in shared memory and use semaphores for locking.
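A minimal sketch of the second approach, with a shared array of slots standing in for a full hash table (the trivial key-modulo-size indexing is illustrative, not a real hash function). A `multiprocessing.Lock`, which is semaphore-based, guards the updates, and every process writes to the one shared copy in place.

```python
from multiprocessing import Process, Lock, Array

NSLOTS = 8

def put(slots, lock, key, value):
    # Trivial stand-in for a hash table insert: index by key modulo size.
    with lock:                      # semaphore-style mutual exclusion
        slots[key % NSLOTS] = value

lock = Lock()
slots = Array('i', NSLOTS, lock=False)  # raw shared memory; we lock explicitly
workers = [Process(target=put, args=(slots, lock, k, k * 10))
           for k in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(list(slots))
```

Every writer mutated the same table; no per-process copies and no update messages were needed.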

Shared memory is very fast; speed is the main advantage and the main reason to use it. You can reserve part of the memory for flags or timestamps that describe the data's validity, but you can use other forms of IPC for signalling if you want to avoid polling the shared memory.
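The combination can be sketched as shared memory for the data plus an `Event` as the doorbell, so the reader blocks instead of polling. The `Event` is one illustrative stand-in; a pipe, signal, or message queue would serve the same signalling role.

```python
from multiprocessing import Process, Event, Array

def writer(buf, ready):
    buf[0] = 42      # publish the data in shared memory first
    ready.set()      # then signal, so nobody has to poll

ready = Event()
buf = Array('i', 1)
p = Process(target=writer, args=(buf, ready))
p.start()
ready.wait()         # sleep until signalled; no busy loop
value = buf[0]
p.join()
print(value)
```

The reader consumes no CPU while waiting, yet still reads the data at shared-memory speed once woken.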