Linux command or script counting duplicated lines in a text file?
Almost the same as borribles', but if you add the -d flag to uniq, it only shows the duplicated lines.
sort filename | uniq -cd | sort -nr
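For example, given a hypothetical file names.txt containing apple three times, banana twice, and cherry once, a run might look like this (the exact spacing of the count column depends on your uniq implementation):

$ sort names.txt | uniq -cd | sort -nr
      3 apple
      2 banana

cherry does not appear in the output because it occurs only once; -d restricts uniq to lines that are repeated.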
uniq -c file
and, in case the file is not already sorted:
sort file | uniq -c
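The sort matters because uniq only collapses adjacent duplicate lines. As a sketch, with a hypothetical unsorted file colors.txt containing the lines red, blue, red:

$ uniq -c colors.txt
      1 red
      1 blue
      1 red
$ sort colors.txt | uniq -c
      1 blue
      2 red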
Send it through sort (to make duplicate lines adjacent), then through uniq -c to get the counts, i.e.:
sort filename | uniq -c
and to get that list in sorted order (most frequent first) you can:
sort filename | uniq -c | sort -nr
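As a rough example, with a hypothetical file pets.txt containing dog three times, cat twice, and fox once, the pipeline prints every distinct line with its count, most frequent first:

$ sort pets.txt | uniq -c | sort -nr
      3 dog
      2 cat
      1 fox

Here sort -nr sorts numerically (-n) on the leading count and reverses the order (-r), so the biggest counts come first.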