Binary stars' apparent magnitude
The bigger dip comes when the cooler star passes in front of the hotter one. The dip is larger in this case because the area of the hotter star covered by the cooler star gives off much more light than an equal area on the cooler star. So when the cool star passes in front of the hot star, a lot of light is blocked and the dip is deep; when the hot star passes in front of the cool star, much less light is lost and the dip is shallow.
The closer in temperature (and surface brightness) the two stars are, the more equal the sizes of the dips. In this case we have a very faint star orbiting a very bright one, since the primary eclipse (at phase 0.0, which is the same as phase 1.0) drops by 1.6 magnitudes while the secondary dip (at phase 0.5) is very small, less than 0.1 magnitudes.
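The quoted magnitude drops can be turned into blocked-flux fractions using the standard magnitude-flux relation; this is just a quick sketch redoing the arithmetic for the 1.6 mag and 0.1 mag dips mentioned above:

```python
def flux_fraction_remaining(delta_mag):
    """Fraction of the system's total flux still visible during an eclipse
    that deepens the light curve by delta_mag magnitudes."""
    return 10 ** (-0.4 * delta_mag)

# Primary eclipse: a 1.6 mag drop means only ~23% of the light remains,
# so roughly 77% of the system's light is blocked.
primary = flux_fraction_remaining(1.6)

# Secondary eclipse: a 0.1 mag drop means ~91% remains, only ~9% blocked.
secondary = flux_fraction_remaining(0.1)

print(f"primary eclipse:   {primary:.2f} of flux remains")
print(f"secondary eclipse: {secondary:.2f} of flux remains")
```

That ~77% versus ~9% asymmetry is what makes the two dips look so different despite both eclipses covering the same area.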
If you think about it logically, it should be easy to visualize.
In fact, the brighter star does not necessarily have to be larger. It could very well be smaller: perhaps the larger star is a red giant, while the smaller star is a blue main-sequence star with a higher luminosity.
In any case, the middle point of the M occurs when the star with the lower surface temperature goes behind the star with the higher surface temperature, and the sides are when the opposite happens. Here's why: the amount of light given off per square meter of a star's surface depends directly on the star's surface temperature. Surface temperature is not always related to the star's size (if both stars are on the main sequence, the larger star will have the higher surface temperature, but if one of the stars is a giant, that may not be the case; giant stars are relatively cool in comparison). Whenever an eclipse occurs, no matter which star is being eclipsed, the same amount of surface area is covered up (equal to the disk of the smaller star). Thus, since the same area is covered both ways, the star with the higher surface temperature gives the deeper dip on the graph when it is eclipsed.
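The equal-covered-area argument can be sketched numerically. Assuming Stefan-Boltzmann surface brightness (flux per unit area proportional to σT⁴) and purely illustrative radii and temperatures (not taken from the question), the dip depth for each eclipse falls out directly:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dip_depths(r_a, t_a, r_b, t_b):
    """Magnitude drop of each total/annular eclipse, treating both stars as
    uniform disks. Radii in any consistent unit, temperatures in kelvin.
    The same area -- the smaller star's disk -- is hidden in both eclipses;
    only the surface brightness sigma*T^4 of the hidden surface differs."""
    flux_a = math.pi * r_a**2 * SIGMA * t_a**4
    flux_b = math.pi * r_b**2 * SIGMA * t_b**4
    total = flux_a + flux_b
    covered = math.pi * min(r_a, r_b) ** 2
    dip_a_hidden = -2.5 * math.log10((total - covered * SIGMA * t_a**4) / total)
    dip_b_hidden = -2.5 * math.log10((total - covered * SIGMA * t_b**4) / total)
    return dip_a_hidden, dip_b_hidden

# Assumed example values: a hot star at 10,000 K with radius 1 and a
# cooler, larger companion at 5,000 K with radius 2 (arbitrary units).
deep, shallow = dip_depths(1.0, 10_000, 2.0, 5_000)
print(f"hot star eclipsed:  {deep:.2f} mag")    # ~1.75 mag, the deep dip
print(f"cool star eclipsed: {shallow:.2f} mag") # ~0.06 mag, the shallow dip
```

The same disk area is hidden in both cases, yet the dip is roughly thirty times deeper when the hot surface is the one being covered, which matches the 1.6 mag versus &lt;0.1 mag curve described in the question.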
What this means is that the brighter star is not necessarily the one with the higher surface temperature. Here's an example: suppose you have an insanely large supergiant with 100,000 times the luminosity of the Sun. Nonetheless, it is fairly cool; its high luminosity is due to its size. We also have a relatively small but extremely hot O-type blue star with 50,000 times the luminosity of the Sun. Now, the supergiant, even though it has a lower surface temperature, is still brighter overall. However, the same principle still applies: the smaller central dip of the M will occur when the blue star is covering up the supergiant (in other words, when the dimmer star covers up the brighter star), and the larger dips will occur when the supergiant covers up the blue star.
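This supergiant-versus-O-star example can be checked with a quick estimate. Only the two luminosities come from the text; the temperatures below (roughly 4,000 K for the supergiant, 40,000 K for the O star) are assumed for illustration, with radii derived from L ∝ R²T⁴:

```python
import math

T_SUN = 5772.0  # solar effective temperature, K

def radius_solar(lum_solar, t_eff):
    """Radius in solar radii from L = 4*pi*R^2*sigma*T^4 (solar units)."""
    return math.sqrt(lum_solar) * (T_SUN / t_eff) ** 2

def dip_mag(flux_lost, flux_total):
    """Magnitude drop when flux_lost of flux_total is blocked."""
    return -2.5 * math.log10((flux_total - flux_lost) / flux_total)

r_sg = radius_solar(100_000, 4_000)   # supergiant: ~660 R_sun
r_o = radius_solar(50_000, 40_000)    # O star: ~4.7 R_sun

total = 150_000.0  # combined luminosity in L_sun

# Blue star transits the supergiant: it hides only a tiny patch of cool
# surface, losing (r_o / r_sg)^2 of the supergiant's 100,000 L_sun.
shallow = dip_mag((r_o / r_sg) ** 2 * 100_000, total)

# Supergiant eclipses the blue star completely: all 50,000 L_sun vanish.
deep = dip_mag(50_000, total)

print(f"blue star in front: {shallow:.5f} mag")  # essentially flat
print(f"blue star hidden:   {deep:.2f} mag")     # ~0.44 mag
```

Under these assumed temperatures the secondary dip is almost invisible while the eclipse of the blue star drops the system by nearly half a magnitude, exactly the ordering the paragraph above predicts.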
See this nice eclipsing binary simulator to get a visual idea of how it works.
In the graph given in your question, the middle of the 'M' represents the brighter star passing in front of the dimmer one. There is an animation on Wikipedia at http://en.wikipedia.org/wiki/Binary_star#Eclipsing_binaries that shows a small star and a large star eclipsing each other: when the small one is in front you get a big dip in magnitude, and when the large one is in front you get a small dip. The two stars differ in color and in luminosity per unit area of sky. The small star has the lower surface brightness, so when it is in front it blocks the otherwise higher concentration of light from the star behind it over the same area, giving a bigger dip than when the brighter-per-unit-area star blocks the dimmer one behind it.