How to count occurrences of text in a file?
You can use the cut and uniq tools:
cut -d ' ' -f1 test.txt | uniq -c
5 5.135.134.16
9 13.57.220.172
1 13.57.233.99
2 18.206.226.75
3 18.213.10.181
Explanation:
cut -d ' ' -f1 : extract the first field (the IP address)
uniq -c : report repeated lines and display the number of occurrences
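Note that uniq -c only counts adjacent duplicate lines. The sample input happens to already be grouped by IP; if yours is not (an assumption on my part), adding a sort between the two commands makes the grouping explicit:
cut -d ' ' -f1 test.txt | sort | uniq -c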
If you don't specifically require the given output format, then I would recommend the already posted cut + uniq based answer.
If you really need the given output format, a single-pass way to do it in Awk would be
awk '{c[$1]++} END{for(i in c) print i, "count: " c[i]}' log
This is somewhat non-ideal when the input is already sorted, since it unnecessarily stores all the IPs in memory. A better, though more complicated, way to do it in the pre-sorted case (more directly equivalent to uniq -c) would be:
awk '
NR==1 {last=$1}                                        # remember the first IP
$1 != last {print last, "count: " c[last]; last = $1}  # IP changed: print the finished group, start the next
{c[$1]++}                                              # count the current IP
END {print last, "count: " c[last]}                    # print the final group
'
Example run
$ awk 'NR==1 {last=$1} $1 != last {print last, "count: " c[last]; last = $1} {c[$1]++} END{print last, "count: " c[last]}' log
5.135.134.16 count: 5
13.57.220.172 count: 9
13.57.233.99 count: 1
18.206.226.75 count: 2
18.213.10.181 count: 3
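One aside on the first one-liner (my addition, not part of the original answer): POSIX awk does not guarantee any particular iteration order for for (i in c), so its output may not come out in input order. If you want deterministic output, you could for example pipe it through sort:
awk '{c[$1]++} END{for(i in c) print i, "count: " c[i]}' log | sort
This sorts lexically by IP rather than preserving the original order, which may or may not be what you want.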
You can use grep and uniq for the list of addresses, loop over them and grep again for the count:
for i in $(<log grep -o '^[^ ]*' | uniq); do
printf '%s count %d\n' "$i" $(<log grep -c "$i")
done
grep -o '^[^ ]*' outputs every character from the beginning of each line (^) until the first space; uniq then removes adjacent repeated lines, leaving you with a list of IP addresses. Thanks to command substitution, the for loop iterates over this list, printing the currently processed IP followed by “ count ” and the count. The latter is computed by grep -c, which counts the number of lines with at least one match.
Example run
$ for i in $(<log grep -o '^[^ ]*'|uniq);do printf '%s count %d\n' "$i" $(<log grep -c "$i");done
5.135.134.16 count 5
13.57.220.172 count 9
13.57.233.99 count 1
18.206.226.75 count 2
18.213.10.181 count 3
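One caveat on grep -c "$i" (my addition, not part of the original answer): it matches the IP anywhere on the line, and the dots in it are regular-expression wildcards, so counts could be inflated on unlucky input. Assuming each line starts with the IP followed by a space, anchoring the pattern tightens this up:
for i in $(<log grep -o '^[^ ]*' | uniq); do
    printf '%s count %d\n' "$i" $(<log grep -c "^$i ")
done
The dots are still wildcards here, but requiring the match at the start of the line and followed by a space makes false positives much less likely.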