Can overwritten files be recovered?
The answer is "Probably yes, but it depends on the filesystem type and the timing."
None of those three examples will overwrite the physical data blocks of old_file or existing_file, except by chance.
mv new_file old_file
This will unlink old_file. If there are additional hard links to old_file, the blocks will remain unchanged and reachable through those remaining links. Otherwise, the blocks will generally (it depends on the filesystem type) be placed on a free list. Then, if the mv requires copying (as opposed to just moving directory entries), new blocks will be allocated as mv writes.
These newly allocated blocks may or may not be the same ones that were just freed. On filesystems like UFS, blocks are allocated, if possible, from the same cylinder group as the directory the file was created in. So there's a chance that unlinking a file from a directory and creating a file in that same directory will re-use (and overwrite) some of the same blocks that were just freed. This is why the standard advice to people who accidentally remove a file is not to write any new data to files in their directory tree (and preferably not to the entire filesystem) until someone can attempt file recovery.
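If you want to check what mv actually does on your system, you can watch it under strace (Linux strace syntax here; the exact syscalls, rename versus renameat2, vary with the platform and coreutils version):
strace -e trace=rename,renameat2,unlink,unlinkat,openat mv new_file old_file
A same-filesystem move should show only a rename-family call, which is why the old blocks are never rewritten; a cross-filesystem move shows the open/write/unlink pattern instead.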
cp new_file old_file
will do the following (you can use strace to see the system calls):
open("old_file", O_WRONLY|O_TRUNC) = 4
The O_TRUNC flag will cause all the data blocks to be freed, just like mv did above. And as above, they will generally be added to a free list, and may or may not get reused by the subsequent writes done by the cp command.
vi existing_file
If vi is actually vim, the :x command does the following:
unlink("existing_file~") = -1 ENOENT (No such file or directory)
rename("existing_file", "existing_file~") = 0
open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 3
So it doesn't even remove the old data; the data is preserved in a backup file.
On FreeBSD, vi does
open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664)
which will have the same semantics as cp, above.
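The traces above are easy to reproduce yourself; roughly the following (Linux strace, where the %file class needs a reasonably recent version) logs every file-related syscall to a file you can grep afterwards:
strace -o cp.trace -e trace=%file cp new_file old_file
strace -o vim.trace -e trace=%file vim existing_file
grep -E 'open|rename|unlink' cp.trace vim.trace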
You can recover some or all of the data without special programs; all you need is grep and dd, and access to the raw device.
For small text files, the single grep command in the answer from @Steven D in the question you linked to is the easiest way:
grep -i -a -B100 -A100 'text in the deleted file' /dev/sda1
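/dev/sda1 is only an example. To find which block device actually backs the directory the file lived in, df on that directory (or lsblk for an overview) will tell you; the name may well be something like /dev/nvme0n1p2 or an LVM volume instead, and the path below is just a placeholder:
df /path/to/the/directory
lsblk -f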
But for larger files that may be in multiple non-contiguous blocks, I do this:
grep -a -b "text in the deleted file" /dev/sda1
13813610612:this is some text in the deleted file
which will give you the offset in bytes of the matching line. Follow this with a series of dd commands, starting with
dd if=/dev/sda1 count=1 skip=$(expr 13813610612 / 512)
You'd also want to read some blocks before and after that block. On UFS, file blocks are usually 8KB and are usually allocated fairly contiguously, a single file's blocks being interleaved alternately with 8KB blocks from other files or free space. The tail of a file on UFS is up to 7 1KB fragments, which may or may not be contiguous.
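As a sketch of that "blocks before and after" step, assuming the offset 13813610612 found by grep above: pull a window of sectors around the match into a scratch file (the name /var/tmp/window.bin and the 64-sector margin are arbitrary) and look through it with strings or a hex viewer:
# 512-byte sectors; 128 sectors total, centred on the matching block
dd if=/dev/sda1 of=/var/tmp/window.bin bs=512 skip=$(( 13813610612 / 512 - 64 )) count=128
strings /var/tmp/window.bin | less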
Of course, on file systems that compress or encrypt data, recovery might not be this straightforward.
There are actually very few utilities in Unix that will overwrite an existing file's data blocks. One that comes to mind is dd conv=notrunc. Another is shred.
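For the record, those two look roughly like this; whether the overwrite is enough to make the old contents unrecoverable on a given storage device is a separate question:
dd if=new_file of=old_file conv=notrunc   # write over old_file's existing blocks without truncating first
shred -u old_file                         # overwrite the file's blocks in place, then unlink it
Note that if new_file is shorter than old_file, the dd form leaves the tail of the old data untouched.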
I'm going to say no (with a giant asterisk).
Think about how data is laid out on a disk. You have blocks which contain data and point to the next block (if there is one).
When you overwrite data you are changing the block contents (and, if you are extending the file, the end-of-file marker). So nothing should be recoverable (see below).
If you shorten the file, then you are losing the old blocks and they will soon be recycled. If you're a programmer, think of a linked list where you "lose" half of your list without doing a free/delete. That data is still there, but good luck finding it.
Something that might be interesting to think about is fragmentation.
Fragmentation occurs when you have "holes" of non-contiguous data on your disk. This can be caused by modifying files such that you extend or shorten them and they no longer fit in their original spot on the disk.
If a file grows past its original size (so it needs to move), then depending on your filesystem, either the entire file is copied to a new location, where the old data would still be there (but marked as free), or only the old ending pointer is changed to point at a new location (which will lead to thrashing).
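If you are curious how fragmented a particular file really is, filefrag from e2fsprogs (so mainly the ext family of filesystems) will print the extents it occupies; the path here is just an example:
filefrag -v /path/to/some/large/file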
Long story short, your data is probably lost (without going through an extreme forensic process where you look at the disk under a microscope); however, there is a chance that it is still there.
Make sure you have enough disk space in /var/tmp or somewhere big.
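A quick way to check is df; the sizes you see will of course depend on your system:
df -h /var/tmp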
Try
grep -i -a -B100 -A100 'a string unique to your file' /dev/sda1 |
strings > /var/tmp/my-recovered-file
where /dev/sda1 would be your disk on your system.
Then search my-recovered-file for your string.
It might mostly be there. If you find it, check for missing line breaks, brackets, symbols, etc.
Use a search word from your file that is fairly unique, or a string that will cut down the amount of data in the file. If you search for a word such as "echo" you will get back loads of matches, as the system will have lots of files containing the word echo.
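Once my-recovered-file exists, a second grep over it with some context is usually enough to home in on the fragment you care about; adjust the context lines and the search string to taste:
grep -n -B5 -A5 'a string unique to your file' /var/tmp/my-recovered-file | less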