Is image blurring an unsafe method to obfuscate information in images?
Is it possible to "de-blur" the image if you know the algorithm and its settings, or by trial and error?
Here I assume we are only considering images that were blurred deliberately by applying a filter, and not as a result of a poor capture (motion or optical blur).
Deblurring definitely is possible, and you will find support for it in many image processing tools. However, blurring intentionally reduces the amount of information in the image, so truly getting back the original image could require "brute force": generating a (humongously) large number of candidate images that all "blur" to the same final image.
Different types of blur lose different amounts of information, but it is possible to reverse all of them (albeit expensively). The cost of deblurring and the number of possible outcomes depend on the number of passes taken by the blur filter and the number of neighbors considered while blurring. Once deblurred, many tools and services should be able to automatically discard many of the candidate outcomes, based on knowing what type of image it is.
For instance, this blog post talks about why blurring content with a low amount of entropy (e.g. checkbooks) is much less secure than blurring something like a human face.
In short, it is indeed possible to get back an image that, when "blurred", will result in the same image you provided. But you cannot guarantee that this deblurred image is the only valid one; picking the right candidate requires some domain knowledge and image analysis (matching edges, checking that objects make semantic sense).
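To make that concrete, here is a minimal sketch (assuming NumPy/SciPy, a synthetic test image, and the idealized case where the exact kernel is known and there is no noise) that blurs an image and then inverts the blur with a regularized frequency-domain division. It is not how any specific tool works; every name and parameter is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic "secret": a 64x64 checkerboard of 8x8 blocks.
y, x = np.indices((64, 64))
original = (((y // 8) + (x // 8)) % 2).astype(float)

sigma = 1.5
# mode='wrap' makes the blur a circular convolution, so FFT division is exact.
blurred = gaussian_filter(original, sigma, mode='wrap')

# Recover the point-spread function by blurring a unit impulse the same way.
impulse = np.zeros_like(original)
impulse[0, 0] = 1.0
psf = gaussian_filter(impulse, sigma, mode='wrap')

# Regularized inverse filter (a bare-bones Wiener-style deconvolution).
H = np.fft.fft2(psf)
eps = 1e-12  # keeps the division stable where |H| is almost zero (no noise in this toy case)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("mean error of blurred :", np.abs(blurred - original).mean())
print("mean error of restored:", np.abs(restored - original).mean())  # far smaller here
```

In practice you rarely know the kernel exactly, and noise and compression limit how far this goes, which is where the brute-force search over candidates comes in.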
To the naked eye, it can be difficult to make out the blurred content. But could the blurring be "reverse engineered" to reveal the original image, or at least something recognizable?
Blurring may not fundamentally transform the "signature" of an image; if the histogram stays similar, it can allow matching. In your case, the human eye can actually make out that this could have been the Google logo (the colors are familiar), but the histogram is quite different: Google itself can't identify the image, and if you study the histogram and color clusters with this online tool, the two images come out quite different.
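If you want to check that "signature" idea on your own images, a small sketch of the comparison might look like the following; the synthetic image, bin count, and sigma are arbitrary assumptions, and how much of the histogram shape survives depends heavily on the image and the blur radius.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
# Blocky synthetic image (structured content keeps the comparison meaningful).
image = np.kron(rng.integers(0, 256, size=(8, 8)), np.ones((16, 16))).astype(float)
blurred = gaussian_filter(image, sigma=4, mode='reflect')

def signature(img, bins=32):
    # Normalized grey-level histogram as a crude image "signature".
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist

# Correlation of 1.0 would mean the blurred image kept an identical histogram shape.
print(np.corrcoef(signature(image), signature(blurred))[0, 1])
```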
It would probably be safer to black out the sensitive content instead (see the post here).
I wish these things weren't possible (e.g. I used to try to go as fast as possible near speed traps so that motion blur would hide my number plates, but that no longer works). Tools to deblur are fairly common now (e.g. Blurity), though they don't work as well on small computer-generated images (less information) as they do on photographs (see the sample of what I recovered).
For further reading, the first chapter of Deblurring Images: Matrices, Spectra, and Filtering by Per Christian Hansen, James G. Nagy, and Dianne P. O’Leary is a really good introduction. It explains how noise and other factors make recovery of the exact original image impossible ("Unfortunately there is no hope that we can recover the original image exactly!") but then goes on to describe how you can get a close match.
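The book models blurring as a linear system, roughly b = A x + noise, where A encodes the blur. A tiny 1-D sketch along those lines (the sizes, sigma, and noise level are arbitrary assumptions) shows why noise rules out exact recovery: the blur matrix is nearly singular, so naively inverting it amplifies even minuscule noise.

```python
import numpy as np

n, sigma = 64, 2.0
i = np.arange(n)
d = np.abs(i[None, :] - i[:, None])
d = np.minimum(d, n - d)                      # wrap-around distance (circulant blur)
A = np.exp(-0.5 * (d / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)             # each row is a normalized Gaussian kernel

x = (np.sin(2 * np.pi * i / 16) > 0).astype(float)    # a simple square-wave "original"
noise = 1e-6 * np.random.default_rng(2).standard_normal(n)
b = A @ x + noise                                      # blurred signal plus tiny noise

x_naive = np.linalg.solve(A, b)                        # naive inversion of the blur
print("condition number of A :", np.linalg.cond(A))    # huge
print("max error, naive solve:", np.abs(x_naive - x).max())  # orders of magnitude above the signal
```

Regularized methods like the ones the book develops trade that exactness away for a stable, close match.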
This survey compares different techniques used in forensic image reconstruction (it's almost 20 years old, so it focuses on fundamentals).
Finally, a link to Schneier's blog where this is discussed in some detail.
Yes, blurring is an unsafe way to censor data in images.
There is software that can easily reverse algorithmic blurring such as Gaussian blurs to a fairly legible result, often well enough to identify objects or read text.
It depends on two things: the image itself (amount of info), and the blur used (type+amount).
The Gaussian blur you mentioned redistributes contrast (information) from where it is concentrated into a diffuse circle around it: most of the weight stays near the center, and less and less reaches the edge of the circle (the blur radius).
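As a rough numerical illustration of that diffusing circle (the kernel size and sigma below are arbitrary choices), these are the weights a Gaussian blur gives to each pixel's neighborhood:

```python
import numpy as np

size, sigma = 7, 1.0
r = np.arange(size) - size // 2
kernel = np.exp(-0.5 * (r[:, None] ** 2 + r[None, :] ** 2) / sigma ** 2)
kernel /= kernel.sum()   # weights sum to 1, so overall brightness is preserved

np.set_printoptions(precision=3, suppress=True)
print(kernel)  # the center weight is ~0.16, the corner weights are practically zero
```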
Instead of a digital image, consider a sand art image of a checkerboard on a rickety table. If you pound your fist down on the table, you mimic a Gaussian blur, which should round out the squares, leaving behind connected overlapping circles. Looking at that messy table, you could still probably conclude that it was a checkerboard before the shakeup.
On the other hand, if you pounded the side of the table, you simulate a motion blur. If the distance of the jolt / inertia of the sand grains exceeds the width of the checkerboard squares, the table will be uniformly covered in sand, and it will be impossible to say if the pre-shake design was a checkerboard, stripes, or an already uniform distribution.
If you only have a Gaussian blur available and you want to obscure text, then you should blur by twice the line height and then posterize the image (a sketch of this is shown below). Blurring spreads big details out into fine details, while posterizing discards fine details. You can also follow the blur with anything else dramatic that discards fine detail: reducing the color depth, crunching the levels, over-compressing, etc.
In short, if the details are spread out and then fine details discarded, there's simply not enough information left to reliably recover the image.
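Here is a minimal sketch of that blur-then-posterize approach using Pillow; the file names, the assumed line height, and the bit depth are placeholders, not a recommendation of specific values.

```python
from PIL import Image, ImageFilter, ImageOps

img = Image.open("secret_text.png").convert("RGB")     # hypothetical input file

line_height_px = 20                                     # assumed line height of the text
blurred = img.filter(ImageFilter.GaussianBlur(radius=2 * line_height_px))

# Posterizing to 2 bits per channel discards the fine gradations that the blur
# spread the information into, so there is nothing left to sharpen back.
censored = ImageOps.posterize(blurred, bits=2)
censored.save("censored.png")
```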