Program stack and heap, how do they work?
Here is my understanding of these questions:
Is the stack also part of some page in main memory?
Yes, the stack is usually also stored in the process address space.
What happens when the program is moved to the waiting state? Where are the stack pointer, program counter, and other info stored?
When the operating system takes the process from active to waiting, it stores all registers (including the stack pointer and the program counter) in the kernel's process table. When the process becomes active again, the OS restores all that information into the CPU registers.
Why does the stack grow down and the heap grow up?
That's because they usually have to share the same address space, and as a convention each one starts at an opposite end of that space. They then grow toward each other, which gives the grow-down/grow-up behavior.
Can the L1/L2 caches contain only one chunk of contiguous memory, or can they hold parts of both the stack and the heap?
The CPU caches store recently used chunks of memory. Because both the stack and the heap live in main memory, the caches can contain portions of both.
Is the stack also part of some page in main memory?
Yes - the stack is typically stored at the "high" addresses of the process address space and grows downward toward its lower limit. The heap typically sits at lower addresses and grows upward toward the stack.
What happens when the program is moved to the waiting state? Where are the stack pointer, program counter, and other info stored?
The O/S stores a "context" per running process. The operation of saving and restoring process state is called a "context switch".
Why does the stack grow down and the heap grow up?
Just a convention as far as I know. Note that the stack's maximum size is usually fixed when the process starts; what "grows" is the portion of that fixed reservation currently in use.
Can the L1/L2 caches contain only one chunk of contiguous memory, or can they hold parts of both the stack and the heap?
Caches simply contain snapshots of parts of RAM that have been used (either recently or nearby). At any moment in time they can have memory from any part of the address space in them. What shows up where depends heavily on the structural parameters of the cache (block length, associativity, total size, etc.).
I would suggest Computer Architecture: A Quantitative Approach as a good reference on the underlying hardware and any book on Operating Systems for how the hardware is "managed".
3. Why does the stack grow down and the heap grow up?
Note that on some systems (some HP systems, for example), the stack grows up instead of down. And on other systems (e.g., IBM/390) there is no real hardware stack at all, but rather a pool of pages that are dynamically allocated from user space memory.
The heap can, in general, grow in any direction, since it may contain many allocation and deallocation holes, so it is better to think of it as a loose collection of pages than as a LIFO-stack type structure. That being said, most heap implementations expand their space usage within a predetermined address range, growing and shrinking it as necessary.