Difference between "Edge Detection" and "Image Contours"
The main difference between finding edges and finding contours is that edge detection produces a new image: in this edge image, the detected edges are highlighted. There are many algorithms for detecting edges; see the "See also" section of the Wikipedia article on edge detection.
For example, the Sobel operator gives smooth, "foggy" results. In your particular case, the catch is that you are using the Canny edge detector. This one goes a few steps further than other detectors: it runs additional edge-refinement steps (non-maximum suppression and hysteresis thresholding). The output of the Canny detector is therefore a binary image, with 1 px wide lines in place of edges.
A contour-finding algorithm, on the other hand, processes an arbitrary binary image. So if you put in a white filled square on a black background and run the contour algorithm, you get back a white, empty square: just the borders.
Another added bonus of contour detection is that it actually returns a set of points! That's great, because you can use these points for further processing.
In your particular case, it's only a coincidence that both images match. That is not a rule; it happens in your case because of the unique property of the Canny algorithm: its output is already a thin, binary edge map.
Edges are computed as points that are extrema of the image gradient in the direction of the gradient. If it helps, you can think of them as the min and max points of a 1D function. The point is that edge pixels are a local notion: they just point out a significant difference between neighbouring pixels.
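The 1D intuition can be checked numerically (the sigmoid step is an assumed toy signal standing in for an image row crossing an edge):

```python
import numpy as np

# A smooth step from 0 to 1, like an image row crossing an edge.
x = np.linspace(-5.0, 5.0, 101)
signal = 1.0 / (1.0 + np.exp(-x))  # sigmoid: intensity rising across the edge

# The gradient (first derivative) of the signal.
grad = np.gradient(signal, x)

# The edge is where the gradient is extremal: exactly the step's midpoint.
print(x[np.argmax(grad)])  # 0.0 -> the steepest point of the step
```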
Contours are often obtained from edges, but they are aimed at being object contours. Thus, they need to be closed curves. You can think of them as boundaries (some image-processing algorithms and libraries call them that). When they are obtained from edges, you need to connect the edges in order to obtain a closed contour.