Using .Net to deskew an image

Given the sample input, it is clear that you are not after image deskewing. That kind of operation will not correct the distortion you have; instead, you need to perform a perspective transform. This can be seen clearly in the following figure. The four white rectangles represent the edges of your four black boxes, and the yellow lines are the result of connecting the black boxes. Note that the yellow quadrilateral is not simply a skewed version of the red one (the rectangle you want to achieve), which is why deskewing cannot fix it.

[figure: the four white box contours, the yellow quadrilateral connecting them, and the target red rectangle]

So, if you can actually get the figure above, the problem becomes a lot simpler. If you did not have the four corner boxes, you would need four other reference points, so they help you a lot. Once you have the image above, you know the four yellow corners, and you simply map them to the four red corners. This is the perspective transform you need to do, and your library may well have a ready-made function for it (there is at least one; check the comments on your question).
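For reference, estimating that transform from the four point correspondences amounts to solving a small linear system for the eight coefficients of the perspective mapping. A minimal numpy sketch (the function name `perspective_coefficients` is my own, not an AForge API):

```python
import numpy

def perspective_coefficients(src, dst):
    # Solve for a..h in:
    #   u = (a*x + b*y + c) / (g*x + h*y + 1)
    #   v = (d*x + e*y + f) / (g*x + h*y + 1)
    # given four (x, y) -> (u, v) correspondences.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        b.extend([u, v])
    return numpy.linalg.solve(numpy.array(A, float), numpy.array(b, float))

# Example: a unit square mapped to a slightly distorted quadrilateral.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (1, 0.1), (1.1, 1), (0, 1)]
coeffs = perspective_coefficients(src, dst)
```

The same eight coefficients are what most perspective-warp functions take as input, so once a library exposes such a function you only need the two sets of corners.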

There are multiple ways to get to the image above, so I will describe a relatively simple one.

First, binarize your grayscale image. I picked a simple global threshold of 100 (your image is in the range [0, 255]), which keeps the boxes and other details in the image (like the strong lines around it). Intensities at or above 100 are set to 255, and those below 100 are set to 0. Since this is a printed image, however, how dark the boxes appear is very likely to vary, so you might need a better method here; something as simple as a morphological gradient could potentially work better.

The second step is to eliminate irrelevant detail. To do that, perform a morphological closing with a 7x7 square (about 1% of the smaller of the width and height of the input image). To get the border of the boxes, compute current_image - erosion(current_image), where the erosion uses an elementary 3x3 square.

Now you have an image with the four white contours as above (this assumes everything but the boxes was eliminated, which I believe is a fair simplification of your other inputs). To get the pixels of these white contours, perform connected component labeling. With these 4 components, determine which one is at the top left, top right, bottom left, and bottom right. Then you can easily find the corners of the yellow quadrilateral. All these operations are readily available in AForge, so it is only a matter of translating the following code to C#:

import sys
import numpy
from PIL import Image, ImageOps, ImageDraw
from scipy.ndimage import grey_closing, grey_erosion, label

# Read input image and convert to grayscale (if it is not yet).
orig = Image.open(sys.argv[1])
img = ImageOps.grayscale(orig)

# Convert PIL image to numpy array (minor implementation detail).
im = numpy.array(img)

# Binarize.
im[im < 100] = 0
im[im >= 100] = 255

# Eliminate undesired details.
im = grey_closing(im, (7, 7))

# Border of boxes.
im = im - grey_erosion(im, (3, 3))

# Find the boxes by labeling them as connected components.
lbl, amount = label(im)
box = []
for i in range(1, amount + 1):
    py, px = numpy.nonzero(lbl == i) # Points in this connected component.
    # Corners of the boxes.
    box.append((px.min(), px.max(), py.min(), py.max()))
box = sorted(box)
# Now the first two elements in the box list contain the
# two left-most boxes, and the other two are the right-most
# boxes. It remains to establish which ones are at the top,
# and which at the bottom.
top = []
bottom = []
for index in [0, 2]:
    if box[index][2] > box[index+1][2]:
        top.append(box[index + 1])
        bottom.append(box[index])
    else:
        top.append(box[index])
        bottom.append(box[index + 1])

# Pick the top left corner, top right corner,
# bottom right corner, and bottom left corner.
reference_corners = [
        (top[0][0], top[0][2]), (top[1][1], top[1][2]),
        (bottom[1][1], bottom[1][3]), (bottom[0][0], bottom[0][3])]

# Convert the image back to PIL (minor implementation detail).
img = Image.fromarray(im)
# Draw lines connecting the reference_corners for visualization purposes.
visual = img.convert('RGB')
draw = ImageDraw.Draw(visual)
draw.line(reference_corners + [reference_corners[0]], fill='yellow')
visual.save(sys.argv[2])

# Map the current quadrilateral to an axis-aligned rectangle.
min_x = min(x for x, y in reference_corners)
max_x = max(x for x, y in reference_corners)
min_y = min(y for x, y in reference_corners)
max_y = max(y for x, y in reference_corners)

# The red rectangle.
perfect_rect = [(min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)]

# Use these points to do the perspective transform.
print(reference_corners)
print(perfect_rect)

The final output of the code above with your input image is:

[(55, 30), (734, 26), (747, 1045), (41, 1036)]
[(41, 26), (747, 26), (747, 1045), (41, 1045)]

The first list of points describes the four corners of the yellow quadrilateral, and the second one the corners of the red rectangle. To do the perspective transform, you can use the ready-made function in AForge. For simplicity, I used ImageMagick here:

convert input.png -distort Perspective "55,30,41,26 734,26,747,26 747,1045,747,1045 41,1036,41,1045" result.png
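If you prefer to stay in Python, the same warp can be done with PIL's `Image.transform`. One detail to watch: PIL's `PERSPECTIVE` transform expects coefficients that map *output* coordinates back to *input* coordinates, so the linear system is solved with the roles reversed. A sketch under that assumption (I create a blank stand-in image here instead of loading `input.png`):

```python
import numpy
from PIL import Image

def find_coeffs(target, source):
    # PIL maps each output pixel (x, y) back to an input location
    # (u, v), so we solve for the target -> source coefficients.
    A, b = [], []
    for (x, y), (u, v) in zip(target, source):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        b.extend([u, v])
    return numpy.linalg.solve(numpy.array(A, float), numpy.array(b, float))

yellow = [(55, 30), (734, 26), (747, 1045), (41, 1036)]  # detected corners
red = [(41, 26), (747, 26), (747, 1045), (41, 1045)]     # target rectangle

img = Image.new('L', (800, 1100), 255)  # stand-in for your input image
coeffs = find_coeffs(red, yellow)
result = img.transform(img.size, Image.PERSPECTIVE, tuple(coeffs),
                       Image.BICUBIC)
```

With your real image you would use `Image.open('input.png')` and save the output with `result.save('result.png')`.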

Which gives the alignment you are after (with blue lines found as before to better show the result):

[figure: the result after the perspective transform, with blue guide lines]

You may notice that the left vertical blue line is not fully straight; in fact, the two left-most boxes are misaligned by one pixel along the x axis. This could be corrected by using a different interpolation method during the perspective transform.