How do I count the number of rows and columns in a file using bash?
Columns: awk '{print NF}' file | sort -nu | tail -n 1
Use head -n 1 for the lowest column count, tail -n 1 for the highest.
Rows: cat file | wc -l
or wc -l < file for the UUOC crowd.
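If you want both numbers in a single pass, plus the minimum and maximum column counts, something like this awk sketch should do it (assuming whitespace-separated fields, which is what awk's default NF counts):
awk 'NR == 1 { min = NF }
     { if (NF > max) max = NF; if (NF < min) min = NF }
     END { print NR " rows, " min " to " max " columns" }' file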
If your file is big but you are certain that the number of columns is the same on every row (and you have no header line), use:
head -n 1 FILE | awk '{print NF}'
to find the number of columns, where FILE is your file name.
To find the number of lines, wc -l FILE will work.
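If the file uses some other delimiter, say commas, the same idea works by telling awk what the field separator is; a quick sketch (assuming a plain CSV with no quoted fields containing commas):
head -n 1 FILE | awk -F',' '{print NF}'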
Alternatively, to count columns, count the separators between them. I find this a good balance of brevity and ease of remembering. Of course, this won't work if your data values themselves contain the separator character.
head -n1 myfile.txt | grep -o " " | wc -l
Uses head -n1 to grab the first line of the file. Uses grep -o to count all the spaces and output each space found on a new line. Uses wc -l to count the number of lines.
EDIT: As Gaurav Tuli points out below, I forgot to mention you have to mentally add 1 to the result, or otherwise script this math.
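If you'd rather have the shell do that math for you, one way (a sketch, using the same space-separated myfile.txt) is arithmetic expansion:
echo $(( $(head -n1 myfile.txt | grep -o " " | wc -l) + 1 ))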
A little twist on kirill_igum's answer: you can easily count the number of columns of any particular row, which is why I came to this question, even though it asks about the whole file. (If your file has the same number of columns on every line, this still works, of course):
head -2 file |tail -1 |tr '\t' '\n' |wc -l
Gives the number of columns of row 2. Replace 2 with 55, for example, to get it for row 55.
-bash-4.2$ cat file
1 2 3
1 2 3 4
1 2
1 2 3 4 5
-bash-4.2$ head -1 file |tail -1 |tr '\t' '\n' |wc -l
3
-bash-4.2$ head -4 file |tail -1 |tr '\t' '\n' |wc -l
5
The code above works if your file is tab-separated, since that is the separator we pass to tr. If your file uses another separator, say commas, you can still count your "columns" with the same trick by changing the character given to tr from '\t' to ',':
-bash-4.2$ cat csvfile
1,2,3,4
1,2
1,2,3,4,5
-bash-4.2$ head -2 csvfile |tail -1 |tr '\,' '\n' |wc -l
2
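If you use this per-row trick often, it is easy to wrap in a small function; this is just a sketch, and the name ncols and the argument order are my own invention:
ncols() {
    # usage: ncols FILE ROW SEPARATOR, e.g. ncols csvfile 2 ','
    head -n "$2" "$1" | tail -n 1 | tr "$3" '\n' | wc -l
}
ncols csvfile 2 ','    # prints 2 for the sample csvfile above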