Performance impact of running different filesystems on a single Linux server
Solution 1:
Splitting the buffer cache is detrimental, but the effect is minimal; I'd guess it's so small that it is practically impossible to measure.
Bear in mind that cached data is not shared between different mount points anyway, even when they use the same filesystem.
While different file systems use different allocation buffers, it's not like the memory is allocated just to sit there and look pretty. Here is slabtop data for a system running three different file systems (XFS, ext4, btrfs):
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 42882  42460  99%    0.70K   1866       23     29856K shmem_inode_cache
 14483  13872  95%    0.90K    855       17     13680K ext4_inode_cache
  4096   4096 100%    0.02K     16      256        64K jbd2_revoke_table_s
  2826   1136  40%    0.94K    167       17      2672K xfs_inode
  1664   1664 100%    0.03K     13      128        52K jbd2_revoke_record_
  1333    886  66%    1.01K     43       31      1376K btrfs_inode_cache
 (many other objects)
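If you want to check this on your own box, the view above can be reproduced with something along these lines (a minimal sketch; slabtop ships with procps-ng and generally needs root to read /proc/slabinfo):

# Show the slab caches once, sorted by cache size, largest first
slabtop --once --sort=c | head -n 15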
As you can see, any really sizeable cache has a utilisation level of over 90%. As such, if you're using multiple file systems in parallel, the cost is about equal to losing 5% of system memory, less if the computer is not a dedicated file server.
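As a rough sanity check of that figure, you can total the per-filesystem inode slabs yourself. This is a sketch assuming the usual /proc/slabinfo layout (name, active_objs, num_objs, objsize, ...) and the cache names shown above:

# Sum the memory held by the filesystem inode caches (run as root;
# column 3 is num_objs, column 4 is the object size in bytes)
awk '/ext4_inode_cache|xfs_inode|btrfs_inode/ { bytes += $3 * $4 }
     END { printf "~%.0f MiB in filesystem inode slabs\n", bytes / 1048576 }' /proc/slabinfo

Compare the result against total RAM to see how much you are actually paying for the extra filesystems.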
Solution 2:
I don't think there's a negative impact. I often have ext3/ext4 mixed with XFS (and even ZFS) on the same server, and I would not describe the performance as anything less than expected, given the hardware I'm running on.
[root@Lancaster ~]# mount
/dev/cciss/c0d0p2 on / type ext4 (rw)
/dev/cciss/c0d0p7 on /tmp type ext4 (rw,nobarrier)
/dev/cciss/c0d0p3 on /usr type ext4 (rw,nobarrier)
/dev/cciss/c0d0p6 on /var type ext4 (rw,nobarrier)
vol2/images on /images type zfs (rw,xattr)
vol1/ppro on /ppro type zfs (rw,noatime,xattr)
vol3/Lancaster_Test on /srv/Lancaster_Test type zfs (rw,noatime,xattr)
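If you want a quick overview of which filesystem types are in play on a box like this, findmnt (from util-linux; assuming a reasonably recent distribution) can filter the mount table by type:

# List only the mounts backed by the given filesystem types
findmnt -t ext4,xfs,zfs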
Are you concerned about a specific scenario? What filesystems would be in play? What distribution are you on?