Importance of fsck at boot with Journalled filesystems?
I'm answering this in the general context of "journalled filesystems".
I think that if you did a number of "unclean shutdowns" (by pulling the power cord or something), sooner or later you'd get to a filesystem state that would require fsck, or the moral equivalent of fsck, xfs_repair. The ext4 filesystem on my laptop for the most part just replays the journal on every reboot, clean shutdowns included, but every once in a while it does a full-on fsck.
But ask yourself what "replaying the journal" accomplishes. Replaying a journal just ensures that the disk blocks of the rest of the filesystem match the ordering that the journal entries demand. Replaying a journal amounts to a small fsck, or to parts of a full-on fsck.
I think there's some verbal sleight of hand going on: replaying a journal does part of what a traditional fsck does, and xfs_repair is exactly the same kind of program that e2fsck (or any other filesystem's fsck) is. The XFS people just believed, or their experience led them to conclude, that xfs_repair need not run on every boot - only the journal needs replaying.
> help ensure the file-system is in a consistent state after an unclean shutdown
The first thing to note is that XFS, ReiserFS and most configurations of ext only implement metadata journalling, which is all about avoiding fsck. The journal is not always replayed on start-up - it may be discarded if it's incomplete.
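To make the "discarded if it's incomplete" point concrete, here is a toy sketch in Python. The record format and names are invented for illustration and don't correspond to any real on-disk layout: journal records are grouped into transactions, and a transaction only counts once its commit marker has reached the disk. At replay time, a trailing transaction without a commit marker is simply dropped, never applied.

```python
# Toy journal replay: only records from fully committed transactions are
# kept; an incomplete trailing transaction is discarded, not applied.
# (Invented format - real journals like ext4's jbd2 are far more involved.)

COMMIT = "TX_COMMIT"  # hypothetical commit marker

def committed_records(journal):
    """Return only the records belonging to committed transactions."""
    applied, pending = [], []
    for rec in journal:
        if rec == COMMIT:
            applied.extend(pending)   # transaction is complete: keep it
            pending = []
        else:
            pending.append(rec)
    return applied                    # leftover 'pending' records are dropped

journal = [("write", "inode 7"), COMMIT,
           ("write", "inode 9")]      # crash before this tx committed
print(committed_records(journal))     # [('write', 'inode 7')]
```

The second transaction never reaches the metadata area, which is safe: the filesystem simply looks as if that operation never started.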
There are filesystems which support full data journalling, but in practice the level of assurance this gives over metadata-only journalling is very small in real-world scenarios.
So an 'inconsistent state', and the problems fixed by fsck, are mismatches between the metadata and the files themselves. To avoid this, the OS writes the proposed metadata changes to the journal, then writes the actual data to disk, then applies the metadata changes recorded in the journal to the disk. The only catch is that the disk controller will buffer and potentially reorder the requests. To avoid this, most journalling filesystems implement barriers: they separate each operation and wait for the disk to acknowledge that it has completed it. However, many modern disks acknowledge completion of writes before the data is actually committed. Hence, things can get messy.
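The write-journal-first sequence described above can be sketched as a toy model. Everything here (class names, the dictionary standing in for on-disk metadata) is invented for illustration; the point is only the ordering: journal first, then a barrier, then the metadata, so that a crash in between is recoverable by replay.

```python
# Toy model of metadata write-ahead journaling. Not a real filesystem -
# just the ordering guarantee the answer describes.

class ToyDisk:
    """Simulates a disk with a journal area and a main metadata area."""
    def __init__(self):
        self.journal = []    # durably committed journal records
        self.metadata = {}   # the "on-disk" metadata proper

    def journal_write(self, record):
        # Step 1: record the intended change in the journal first.
        self.journal.append(record)

    def barrier(self):
        # Step 2: a write barrier - wait until the journal record is durable
        # before touching the main metadata. A no-op in this toy model, but
        # on real hardware this ordering is what makes replay trustworthy.
        pass

    def metadata_write(self, key, value):
        # Step 3: apply the change to the main metadata area.
        self.metadata[key] = value

def replay(disk):
    """On mount after a crash, re-apply every committed journal record."""
    for key, value in disk.journal:
        disk.metadata[key] = value

# Simulate a crash between the journal commit and the metadata write:
disk = ToyDisk()
disk.journal_write(("inode 42 size", 4096))
disk.barrier()
# -- power is lost here, so metadata_write() never runs --

replay(disk)          # this is what "replaying the journal" accomplishes
print(disk.metadata)  # {'inode 42 size': 4096}
```

Note that if the disk lies about write completion (acknowledging before the data is committed), the barrier in step 2 no longer guarantees the journal record survived the crash, which is exactly the "things can get messy" caveat above.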
> Is a fsck still needed after an unclean shutdown and why
Most filesystems maintain a mount count - once this count is reached, a full fsck will be triggered at the next attempt to mount the disk. The reason is that disk data may become corrupted even when it's not explicitly being written to, even without bugs in the software. psusi's comment above is wrong.
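The mount-count policy amounts to a very small piece of logic, sketched here in Python. The function name and the convention that a non-positive maximum disables the check are my own (though ext's tune2fs -c 0 behaves similarly); this is an illustration of the policy, not any filesystem's actual code.

```python
# Hypothetical sketch of the periodic-check policy: each mount increments
# a counter, and once the configured maximum is reached, a full fsck is
# forced at the next mount even if every shutdown was clean.

def needs_full_fsck(mount_count, max_mount_count):
    # A non-positive maximum means the periodic check is disabled
    # (analogous to ext's tune2fs -c 0).
    if max_mount_count <= 0:
        return False
    return mount_count >= max_mount_count

print(needs_full_fsck(29, 30))  # False - not there yet
print(needs_full_fsck(30, 30))  # True  - forced full check
print(needs_full_fsck(99, 0))   # False - periodic checking disabled
```

The point of the policy is exactly the one in the answer: journal replay only protects against crash-interrupted writes, so a periodic full check catches corruption that arises without any write being interrupted at all.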
There is no need to fsck a journaling filesystem simply because of an unclean shutdown.
The entire reason for enduring the runtime performance penalty of metadata journaling is to ensure that the filesystem can be made 100% consistent again by automatically replaying the metadata log on the next mount, if the filesystem wasn't cleanly unmounted.
fsck's only role is to ensure metadata consistency, so it is redundant to run fsck simply because the filesystem wasn't properly unmounted.
A journaling filesystem can get corrupted for other reasons, though - hardware failure, driver bugs, admin errors, etc. - so fsck tools are certainly necessary. There's just no reason to invoke them solely due to an unclean shutdown.