How do I split a CSV file into multiple files with a specified number of rows each?

Use the Linux split command:

split -l 20 file.txt new    

This splits the file "file.txt" into files beginning with the name "new", each containing 20 lines of text.

Type man split at the Unix prompt for more information. Note, however, that split knows nothing about the CSV header row: you will first have to remove the header from file.txt (using the tail command, for example) and then add it back onto each of the split files.
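A minimal sketch of that workflow, assuming GNU coreutils and the same file.txt / new names used above:

header=$(head -n 1 file.txt)             # save the header row
tail -n +2 file.txt | split -l 20 - new  # split the remaining rows into 20-line pieces
for f in new*; do                        # prepend the saved header to every piece
    printf '%s\n' "$header" | cat - "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done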


I made it into a function. You can now call splitCsv <filename> [chunkSize] (chunkSize defaults to 1000 rows):

splitCsv() {
    # Usage: splitCsv <filename> [chunkSize]  (chunkSize defaults to 1000)
    local header chunk f
    header=$(head -n 1 "$1")
    chunk=${2:-1000}
    # Split everything after the header row into chunks of $chunk lines
    tail -n +2 "$1" | split -l "$chunk" - "${1}_split_"
    # Re-insert the header at the top of each chunk (requires GNU sed)
    for f in "${1}"_split_*; do
        sed -i -e "1i$header" "$f"
    done
}
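For example, with a hypothetical input data.csv and 500 rows per chunk:

splitCsv data.csv 500
head -n 1 data.csv_split_*   # every chunk should start with the original header row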

Found on: http://edmondscommerce.github.io/linux/linux-split-file-eg-csv-and-keep-header-row.html


A one-liner which preserves the header row in each split file. This example gives you 999 rows of data plus one header row per file.

cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'

From https://stackoverflow.com/a/53062251/401226, where the comments explain how to install the correct version of parallel (on Ubuntu, install the dedicated parallel package, which is more recent than the version bundled with moreutils).
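A quick sanity check, assuming the one-liner produced file_1.csv, file_2.csv, and so on:

head -n 1 file_1.csv file_2.csv   # each piece should start with the header row
wc -l file_*.csv                  # each piece except possibly the last should have 1000 lines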


This should work (note that, like the plain split examples above, it does not treat the header row specially):

file_name.csv = the file you want to split
10000 = the number of rows each split file will contain
file_part_ = the prefix of the split file names; with -d the suffixes are numeric (file_part_00, file_part_01, file_part_02, and so on)

split -d -l 10000 file_name.csv file_part_
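If you also want each piece to keep a .csv extension, recent GNU split accepts --additional-suffix (an optional tweak, not part of the original answer):

split -d -l 10000 --additional-suffix=.csv file_name.csv file_part_
# produces file_part_00.csv, file_part_01.csv, ...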