bash how to remove duplicate lines from a text file code example
Example 1: bash remove duplicate lines from a file
# Basic syntax:
sort input_file | uniq
# Sort the file first because uniq only removes *adjacent* duplicate
# lines; sorting groups the duplicates together. uniq then keeps one
# instance of each line.
# Note, this doesn't return only the lines that were never duplicated.
# It returns one instance of every line, whether or not it is
# duplicated. (To print only the never-duplicated lines, use
# uniq -u / --unique instead.)
# Note, if you want to return only one instance of all lines but see
# the number of repetitions for each line, run:
sort input_file | uniq -c
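As a quick check, the pipeline above can be run against a small sample file (the /tmp/fruits.txt path and its contents are made up for this illustration):

```shell
# Create a sample file containing duplicate lines (hypothetical data)
printf 'apple\nbanana\napple\ncherry\nbanana\n' > /tmp/fruits.txt

# Collapse duplicates: one instance of each line survives
sort /tmp/fruits.txt | uniq
# apple
# banana
# cherry

# Same lines, with a repetition count prefixed to each
sort /tmp/fruits.txt | uniq -c
```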
Example 2: removing duplicate input from a file in the command line
# sort -u sorts and removes duplicate lines in a single step:
sort -u input_file.txt
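sort -u gives the same result as piping sort into uniq. A minimal sketch, reusing a hypothetical /tmp/fruits.txt sample file:

```shell
# Sample file with duplicate lines (hypothetical data)
printf 'apple\nbanana\napple\ncherry\nbanana\n' > /tmp/fruits.txt

# One-step deduplication: sort and drop duplicates together
sort -u /tmp/fruits.txt
# apple
# banana
# cherry

# Confirm it matches the two-step sort | uniq pipeline
sort -u /tmp/fruits.txt > /tmp/a.txt
sort /tmp/fruits.txt | uniq > /tmp/b.txt
cmp -s /tmp/a.txt /tmp/b.txt && echo "identical"
```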