Detect black & white circles (frog oocytes) in image
With this kind of image, you will not get a perfect count of the oocytes. There are several ways to approach this, but if you plan to do this for more experiments, I would definitely change the setup. Usually I can talk to my fellow researcher and discuss whether or not something is possible; since we cannot do that here, let me just give you an idea and let you decide whether you want to try it.
One option I see here is to reverse the lighting. Currently, you just take a photo with the ambient room light. Change that so the dish is backlit, with the light source beneath it:
What happens with this setup is that your eggs block the light, and since they all seem to contain some black pigment, they should all appear consistently dark. Furthermore, the hope is that the light shines through the small gaps between the cells, which makes it much easier to separate the dark spots they leave on the photo. Instead of getting headaches because of the mix of black and white eggs, you will get only dark spots. The crucial parts are:
- you need some white paper that acts as a diffuser to create a consistently bright background. It is possible to remove inconsistent illumination afterwards, but if you can avoid it in the experimental setup, it will be much better
- the room should be dark, so that the lamp beneath the table is the only light source
- and of course, take the photo without the flash
Let me demonstrate this. Experimental setup 1:
- 1 IKEA GRÖNÖ table lamp
- 1 sheet of normal printing paper
- pieces of a Go game (unfortunately no oocytes available)
- 1 mobile phone
With your current setup, the image would look like this
and with the lamp beneath the pieces, it looks like this
If this works in your case, I assume you will get much sharper peaks for all of your eggs, and I hope I can convince you that there is a good chance it will greatly improve the image-processing possibilities.
More about illumination and removing background
I forgot to mention one important thing we use when we want to have consistent lighting through a whole series of (microscopic) images. If you fix your camera with a tripod and make sure the light source doesn't move, you can remove the background almost completely in a very easy way:
- Turn the light source on and let it warm up for a while until it doesn't change anymore
- Prepare your table and camera but don't put the Petri dish on it
- Now, take a so-called brightfield image: a photo of the table without anything on it. It is important to fix your camera settings: no auto-gain, brightness adjustment, auto-focus, etc. The easiest way is to photograph the Petri dish first and adjust all settings until you have a nice photo. Then take the Petri dish away and take the brightfield image of the table only.
- Now, you can use the brightfield image to subtract the background entirely from your Petri dish image.
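Once you have both photos, the correction itself is a one-liner. Here is a minimal sketch, assuming the two photos are saved as dish.jpg and brightfield.jpg (hypothetical file names):

```mathematica
(* flat-field correction: dividing by the brightfield image cancels the
   illumination pattern; ImageAdjust rescales the result to the full range *)
dish = Import["dish.jpg"];        (* photo of the Petri dish *)
bf   = Import["brightfield.jpg"]; (* photo of the empty table, same settings *)
corrected = ImageAdjust@ImageDivide[dish, bf]
```

ImageSubtract[dish, bf] works as well, but division is usually the better model, because uneven illumination acts multiplicatively on the scene.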
A very good tutorial for microscope images can be found here; the same approach should work for you. I wouldn't bother with the darkfield image for the moment, but removing the background with the brightfield image will save you a lot of pain.
Your current image
For your current image, it will be hard to separate dark eggs that are close together without losing some of the white sides. Nevertheless, here is my straightforward hack: no optimization, all parameters chosen by trial and error.
Finding the round mask by removing illumination inconsistencies and using the dark Petri dish ring:
img = Import["https://i.stack.imgur.com/AUuYL.jpg"]
With[{i = ColorNegate[img]},
 (* remove uneven illumination, then binarize to pick up the dark ring *)
 mask = Dilation[Binarize[i - GaussianFilter[i, 100], .08], 3];
 (* fill the ring, keep the largest component, and shrink to the dish interior *)
 mask = Erosion[SelectComponents[FillingTransform[mask], "Area", -1], 40]
];
HighlightImage[img, mask]
Next, combine a normal binarization, which gives an estimate of the egg positions, with a local adaptive binarization, which gives better-separated cells:
With[{i = ColorConvert[ColorNegate[img], "Grayscale"]},
 (* global threshold on a band-pass filtered image, restricted to the dish mask *)
 m1 = Binarize[GaussianFilter[i, 2] - GaussianFilter[i, 400], .15]*mask;
 (* local adaptive binarization separates touching eggs *)
 eggs = Erosion[LocalAdaptiveBinarize[i, 20, {1, 0, 0.1}], 0]*m1
]
Use the morphological components to find the center of all detected eggs. If eggs are clustered, there will be one point in the center of several.
pos = Values@ComponentMeasurements[
DeleteSmallComponents[eggs, 3],
"Medoid"];
HighlightImage[
img,
{Red, PointSize[0.005], Point[pos]}
]
Length[pos]
then tells us that there are about 418 eggs in the image. I assume the true number is more like 450-500, as many of the white eggs in crowded places were not recognized. With my proposed setup, I hope the accuracy can be improved.
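As a rough patch until the lighting is improved, one could also estimate how many eggs hide in each clustered blob by comparing its area to the typical single-egg area. This is only a hedged sketch reusing the eggs image from above; the /. 0 -> 1 guard is my own addition so that unusually small blobs still count as one egg:

```mathematica
(* estimate eggs per blob: area of each component divided by the median area *)
areas = Values@ComponentMeasurements[DeleteSmallComponents[eggs, 3], "Area"];
Total[Round[areas/Median[areas]] /. 0 -> 1]
```

This treats a merged blob as several eggs instead of one, but it remains a heuristic and no substitute for better-separated spots in the photo.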
Here's an estimate. There are definitely better ways to do this, perhaps someone else will shed more light.
First I crop the image, just for the sake of a smaller search space.
i = Import["https://i.stack.imgur.com/AUuYL.jpg"];
(* mask out everything outside the dish with a disk, then crop *)
i2 = ImageCrop@
  ImageMultiply[i,
   ImageResize[ImagePad[Image@DiskMatrix[75], 25], ImageDimensions@i]]
Next, I crop a single match out of the image.
Now I divide the image into parts roughly the size of my match, and determine the distance between the match and each of these parts.
dists = MapAt[
ImageDistance[#, ColorConvert[match, "Grayscale"],
DistanceFunction -> "MeanPatternIntensity"] &,
ImagePartition[ColorConvert[i2, "Grayscale"],
UpTo[ImageDimensions@match*0.9]], {All, All}];
This gives us a list that looks like this:
Then, we binarize the list and count the components.
components =
SelectComponents[
Binarize[Image@
dists], #Elongation < .7 && #AdjacentBorderCount == 0 &]
Total[ImageData[components], 2]
gives us 463, since each remaining white pixel corresponds to one tile that resembles the match. I haven't counted manually, but it seems roughly right.
If we overlay the two, we can see roughly what's getting counted:
Other, probably more rigorous methods might involve ImageCorrelate. You could also probably train a neural network to do it if you came up with a clever way to generate training data, but that's more of a weekend project :)
Good luck!