filesystem for archiving
Btrfs has native support for snapshots, so you wouldn't have to use hard links for deduplication. You could recreate your current setup by creating a btrfs filesystem, loading it with the earliest revision you need, and taking a snapshot; then rev the repository forward to each point in time you want preserved, taking a snapshot at each step. This should be more efficient than hard links, and simpler to set up as well.
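A minimal sketch of that workflow, assuming the repository is SVN and that the paths, URL, and revision numbers below are placeholders you would adapt:

```python
import os
import subprocess

def run(cmd):
    """Run a command, raising if it fails."""
    subprocess.run(cmd, check=True)

# All paths, the repository URL, and the revision list are illustrative.
SUBVOL = "/mnt/archive/repo"          # working subvolume on a btrfs filesystem
SNAP_DIR = "/mnt/archive/snapshots"   # where the read-only snapshots go
REPO_URL = "http://example.com/repo"  # hypothetical repository URL
REVISIONS = ["100", "200", "300"]     # revisions you want to keep

os.makedirs(SNAP_DIR, exist_ok=True)

# Create a subvolume for the working copy and check out the earliest revision.
run(["btrfs", "subvolume", "create", SUBVOL])
run(["svn", "checkout", "-r", REVISIONS[0], REPO_URL, SUBVOL])

for rev in REVISIONS:
    # Rev the working copy forward to this point in time.
    run(["svn", "update", "-r", rev, SUBVOL])
    # Take a read-only snapshot; unchanged blocks are shared with the working
    # copy, so each snapshot only costs the delta since the previous one.
    run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, f"{SNAP_DIR}/r{rev}"])
```

Each snapshot then appears as an ordinary directory tree you can browse or export, without any hard-link bookkeeping.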
I also think (though I'm far from sure of this) that squashfs deduplicates files transparently, so even if it doesn't handle hard links, you'd still see the space savings. If you never need to change the data in the filesystem, squashfs is probably the way to go, since fsck could then be replaced by md5sum ;)
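If you go that route, building the image is a one-shot mksquashfs run; a sketch, with placeholder paths, where the checksum step stands in for the fsck-replacement idea above:

```python
import subprocess

# Illustrative paths; adjust to wherever the archive tree and image live.
SOURCE = "/mnt/archive/snapshots"
IMAGE = "/mnt/archive/archive.sqfs"

# Build a compressed, read-only image. mksquashfs detects files with
# identical contents and stores them only once, hard links or not.
subprocess.run(["mksquashfs", SOURCE, IMAGE, "-comp", "xz"], check=True)

# The image never changes, so integrity checking can be a stored checksum
# comparison rather than a full fsck.
subprocess.run(["md5sum", IMAGE], check=True)
```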
I would prefer XFS since I have had very good experiences with this filesystem. But I really recommend that you run a test with your own data on all of the filesystems suggested here.
If it's about fsck slowness, did you try ext4? They added a few features to it that make fsck really quick by not looking at unused inodes:
Fsck is a very slow operation, especially the first step: checking all the inodes in the file system. In Ext4, at the end of each group's inode table a list of unused inodes is stored (with a checksum, for safety), so fsck will not check those inodes. The result is that total fsck time improves from 2 to 20 times, depending on the number of used inodes (http://kerneltrap.org/Linux/Improving_fsck_Speeds_in_Ext4). It must be noted that it's fsck, and not Ext4, that builds the list of unused inodes. This means that you must run fsck once to get the list of unused inodes built, and only the next fsck run will be faster (you need to run fsck in order to convert an Ext3 filesystem to Ext4 anyway). There's also a feature that contributes to this fsck speed-up, "flexible block groups", which speeds up filesystem operations in general.
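If the archive currently sits on ext3, the conversion alluded to above looks roughly like this (the device name is a placeholder; note that the forced fsck after enabling the features is the run that builds the unused-inode lists, so only the fsck after that one is fast):

```python
import subprocess

DEVICE = "/dev/sdX"  # placeholder: the block device holding the archive

# Enable the ext4 on-disk features on an existing ext3 filesystem:
# extents, uninit_bg (the unused-inode tracking used for fast fsck), dir_index.
subprocess.run(["tune2fs", "-O", "extents,uninit_bg,dir_index", DEVICE], check=True)

# A full fsck is mandatory after changing the feature flags; this run builds
# the unused-inode lists, so it is the *next* fsck that comes out fast.
subprocess.run(["e2fsck", "-fp", DEVICE], check=True)
```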