Finding duplicate files and replacing them with symlinks
If you don't fancy much scripting, I can recommend rdfind, which scans the given directories for duplicate files and replaces them with either hard or symbolic links. I've used it to deduplicate my Ruby gems directory with great success. It's available in Debian/Ubuntu.
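For example (a dry run first never hurts; ~/.gem here is just a stand-in for whatever directory you want to deduplicate):

# preview what rdfind would change, without touching anything
rdfind -dryrun true ~/.gem

# replace duplicates with symlinks (or -makehardlinks true for hard links)
rdfind -makesymlinks true ~/.gem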
I had a similar situation, but in my case the symbolic link had to point to a relative path, so I wrote this Python script to do the trick:
#!/usr/bin/env python3
# Reads fdupes (-r -1) output and creates a relative symbolic link for each duplicate
# usage: fdupes -r -1 . | ./lndupes.py
import os
import sys
from os.path import dirname, relpath, basename, join

for line in sys.stdin:
    # each line is a space-separated list of identical files
    files = line.strip().split(' ')
    first = files[0]
    print("First: %s" % first)
    for dup in files[1:]:
        # relative path from the duplicate's directory to the original's directory
        rel = relpath(dirname(first), dirname(dup))
        print("Linking duplicate: %s to %s" % (dup, join(rel, basename(first))))
        os.unlink(dup)
        os.symlink(join(rel, basename(first)), dup)
For each input line (a space-separated list of identical files), the script takes the first file as the original, computes the relative path from each duplicate's directory to the original's directory, deletes the duplicate, and creates a relative symlink in its place. (Because the line is split on spaces, filenames containing spaces will break it.)
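As a quick illustration of the relative-path step, suppose fdupes reports ./a/b/file1 and ./a/c/file2 on one line (hypothetical paths): relpath('./a/b', './a/c') evaluates to '../b', so the script ends up doing the equivalent of:

rm ./a/c/file2
ln -s ../b/file1 ./a/c/file2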
First: is there a reason you need to use symlinks and not the usual hard links? I am having a hard time understanding the need for symlinks with relative paths. Here is how I would solve this problem:
I think the Debian (Ubuntu) version of fdupes can replace duplicates with hard links using the -L option, but I don't have a Debian installation to verify this.
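If your build does have it, the invocation would presumably be as simple as this (untested, per the caveat above):

# -r recurse into subdirectories, -L replace duplicates with hard links
fdupes -rL path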
If you do not have a version with the -L option, you can use this tiny bash one-liner I found on commandlinefu. Note that this syntax will only work in bash.
fdupes -r -1 path | while read line; do master=""; for file in ${line[*]}; do if [ "x${master}" == "x" ]; then master=$file; else ln -f "${master}" "${file}"; fi; done; done
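Unrolled with comments, the same loop looks like this:

fdupes -r -1 path | while read line; do
    master=""
    for file in ${line[*]}; do
        if [ "x${master}" == "x" ]; then
            # the first file on the line becomes the master copy
            master=$file
        else
            # every later file is replaced by a hard link to the master
            ln -f "${master}" "${file}"
        fi
    done
done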
The above command will find all duplicate files in "path" and replace them with hard links. You can verify this by running ls -ilR and looking at the inode number. Here is a sample with ten identical files:
$ ls -ilR
.:
total 20
3094308 -rw------- 1 username group 5 Sep 14 17:21 file
3094311 -rw------- 1 username group 5 Sep 14 17:21 file2
3094312 -rw------- 1 username group 5 Sep 14 17:21 file3
3094313 -rw------- 1 username group 5 Sep 14 17:21 file4
3094314 -rw------- 1 username group 5 Sep 14 17:21 file5
3094315 drwx------ 1 username group 48 Sep 14 17:22 subdirectory
./subdirectory:
total 20
3094316 -rw------- 1 username group 5 Sep 14 17:22 file
3094332 -rw------- 1 username group 5 Sep 14 17:22 file2
3094345 -rw------- 1 username group 5 Sep 14 17:22 file3
3094346 -rw------- 1 username group 5 Sep 14 17:22 file4
3094347 -rw------- 1 username group 5 Sep 14 17:22 file5
All the files have separate inode numbers, making them separate files. Now let's deduplicate them (this variant of the loop extracts the first file on each line with a parameter expansion instead of a marker variable):
$ fdupes -r -1 . | while read line; do j="0"; for file in ${line[*]}; do if [ "$j" == "0" ]; then j="1"; else ln -f ${line// .*/} $file; fi; done; done
$ ls -ilR
.:
total 20
3094308 -rw------- 10 username group 5 Sep 14 17:21 file
3094308 -rw------- 10 username group 5 Sep 14 17:21 file2
3094308 -rw------- 10 username group 5 Sep 14 17:21 file3
3094308 -rw------- 10 username group 5 Sep 14 17:21 file4
3094308 -rw------- 10 username group 5 Sep 14 17:21 file5
3094315 drwx------ 1 username group 48 Sep 14 17:24 subdirectory
./subdirectory:
total 20
3094308 -rw------- 10 username group 5 Sep 14 17:21 file
3094308 -rw------- 10 username group 5 Sep 14 17:21 file2
3094308 -rw------- 10 username group 5 Sep 14 17:21 file3
3094308 -rw------- 10 username group 5 Sep 14 17:21 file4
3094308 -rw------- 10 username group 5 Sep 14 17:21 file5
The files now all have the same inode number, meaning they all point to the same physical data on disk.
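If you want to double-check an individual file, the link count and inode are easy to query (assuming GNU stat and find; the BSD variants use different flags):

# print inode, hard-link count and name
stat -c '%i %h %n' file

# list every path that shares this file's inode
find . -samefile ./file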
I hope this solves your problem or at least points you in the right direction!