Finding and removing duplicate files in macOS with a script
Another option is to use fdupes:
brew install fdupes
fdupes -r .
finds duplicate files recursively under the current directory. Add -d to delete the duplicates; you'll be prompted which files to keep. If instead you use -dN, fdupes always keeps the first file and deletes the rest.
Firstly, you'll have to reorder the first command line so the order of files found by the find command is maintained:
find . -size 20 ! -type d -exec cksum {} \; | tee /tmp/f.tmp | cut -f 1,2 -d ' ' | sort | uniq -d | grep -hif - /tmp/f.tmp > duplicates.txt
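As a quick sanity check, the pipeline can be exercised on a throwaway directory (this sketch uses -type f rather than the size filter so the tiny test files are picked up; the file names are made up for the demo):

```shell
# Create a scratch directory with two identical files and one unique file,
# then run the duplicate-detection pipeline against it.
tmp=$(mktemp -d)
cd "$tmp"
echo "same content" > a.txt
echo "same content" > b.txt
echo "unique content" > c.txt

# cksum lines are "checksum size filename"; lines whose checksum+size
# pair occurs more than once are pulled back out of /tmp/f.tmp.
find . -type f -exec cksum {} \; | tee /tmp/f.tmp \
  | cut -f 1,2 -d ' ' | sort | uniq -d \
  | grep -hif - /tmp/f.tmp > duplicates.txt

cat duplicates.txt
```

Only a.txt and b.txt should end up in duplicates.txt; c.txt has a unique checksum and is filtered out by uniq -d.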
(Note: for testing purposes on my machine I used find . -type f -exec cksum {} \;)
Secondly, one way to print all but the first duplicate is by use of an auxiliary file, let's say /tmp/f2.tmp
. Then we could do something like:
while read -r line; do
  checksum=$(echo "$line" | cut -f 1,2 -d' ')
  # use field 3- so file names containing spaces aren't truncated
  file=$(echo "$line" | cut -f 3- -d' ')
  if grep "$checksum" /tmp/f2.tmp > /dev/null; then
    # /tmp/f2.tmp already contains the checksum:
    # print the file name
    # (printf is safer than echo when, for example, "$file" starts with "-")
    printf '%s\n' "$file"
  else
    echo "$checksum" >> /tmp/f2.tmp
  fi
done < duplicates.txt
Just make sure that /tmp/f2.tmp exists and is empty before you run this, for example with the following commands:
rm -f /tmp/f2.tmp
touch /tmp/f2.tmp
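Putting the two steps together, here is a minimal self-contained sketch: it feeds the loop a hand-made duplicates.txt (cksum-style lines with invented checksums and file names) and collects every duplicate after the first of each group.

```shell
# Reset the auxiliary file, then simulate a duplicates.txt with two
# duplicate groups (checksums and names are made up for the demo).
rm -f /tmp/f2.tmp && touch /tmp/f2.tmp

printf '%s\n' \
  "1234 13 ./a.txt" \
  "1234 13 ./b.txt" \
  "5678 13 ./c.txt" \
  "5678 13 ./d.txt" > duplicates.txt

# First file of each checksum group is kept; the rest are printed.
while read -r line; do
  checksum=$(echo "$line" | cut -f 1,2 -d' ')
  file=$(echo "$line" | cut -f 3- -d' ')
  if grep "$checksum" /tmp/f2.tmp > /dev/null; then
    printf '%s\n' "$file"
  else
    echo "$checksum" >> /tmp/f2.tmp
  fi
done < duplicates.txt > to_delete.txt

cat to_delete.txt
```

to_delete.txt should contain ./b.txt and ./d.txt; a.txt and c.txt are the first of their groups, so their checksums land in /tmp/f2.tmp instead.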
Hope this helps =)
I wrote a script that renames your files to match a hash of their contents.
It uses a subset of the file's bytes so it's fast, and if there's a collision it appends a counter to the name like this:
3101ace8db9f.jpg
3101ace8db9f (1).jpg
3101ace8db9f (2).jpg
This makes it easy to review and delete duplicates on your own, without trusting somebody else's software with your photos more than you need to.
Script: https://gist.github.com/SimplGy/75bb4fd26a12d4f16da6df1c4e506562
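For illustration only, the partial-hash-and-rename idea can be sketched in shell (this is a hypothetical helper, not the logic of the linked gist; it hashes the first 4096 bytes with cksum and appends a counter on name collisions):

```shell
# Hypothetical sketch of the idea: rename a file to <hash>.<ext>, where
# the hash covers only the first 4096 bytes; on collision, append " (n)".
hash_rename() {
  f=$1
  ext=${f##*.}   # naive extension extraction (assumes the name has a dot)
  h=$(head -c 4096 "$f" | cksum | cut -d' ' -f1)
  target="$h.$ext"
  n=1
  while [ -e "$target" ] && [ "$target" != "$f" ]; do
    target="$h ($n).$ext"
    n=$((n + 1))
  done
  [ "$target" = "$f" ] || mv -- "$f" "$target"
}

tmp=$(mktemp -d); cd "$tmp"
printf 'photo bytes' > one.jpg
printf 'photo bytes' > two.jpg   # same content, so same hash: gets " (1)"
hash_rename one.jpg
hash_rename two.jpg
ls
```

After running, the directory holds two files with the same hash prefix, the second suffixed with " (1)", so identical content sorts together and duplicates are easy to spot by eye.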