cp: maximum number of source file arguments for the copy utility
That depends greatly on the system and version, on the number and size of the arguments, and on the number and size of the environment variables (names and values).
Traditionally on Unix, the limit (as reported by getconf ARG_MAX) was more or less on the cumulative size of:
- The length of the argument strings (including the terminating '\0')
- The length of the array of pointers to those strings, so typically 8 bytes per argument on a 64-bit system
- The length of the environment strings (including the terminating '\0'), an environment string being by convention something like var=value
- The length of the array of pointers to those strings, so typically 8 bytes per environment string on a 64-bit system
Bearing in mind that cp also counts as an argument (it is the first argument).
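As a rough way to see how much of that budget your current environment already uses (a minimal sketch, assuming a 64-bit system and that no variable values contain newlines; env prints one var=value string per line, so each newline stands in for a terminating '\0'):
$ env | wc -c    # approximate cumulative size of the environment strings
$ env | wc -l    # number of environment strings; each also costs 8 bytes of pointer on a 64-bit system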
On Linux, it depends on the version. The behaviour there changed recently: it is no longer a fixed space.
Checking on Linux 3.11, getconf ARG_MAX now reports a quarter of the limit set on the stack size, or 128 KiB if the stack size limit is less than 512 KiB.
(zsh syntax below):
$ limit stacksize
stacksize 8MB
$ getconf ARG_MAX
2097152
$ limit stacksize 4M
$ getconf ARG_MAX
1048576
That limit is on the cumulative size of the argument and environment strings plus some overhead (I suspect due to alignment considerations on page boundaries). The size of the pointers is not taken into account.
Searching for the limit, I get:
$ /bin/true {1..164686}
$ /bin/true {1..164687}
zsh: argument list too long: /bin/true
$ x= /bin/true {1..164686}
$ x=1 /bin/true {1..164686}
zsh: argument list too long: /bin/true
The maximum cumulative size before breaking in that case is:
$ (env _=/bin/true x=;print -l /bin/true {1..164686}) | wc -c
1044462
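Comparing that with the 1 MiB ARG_MAX reported above for the 4 MiB stack gives a rough upper bound on the overhead mentioned earlier (simple arithmetic on the numbers shown here):
$ echo $((1048576 - 1044462))   # slack between ARG_MAX and the measured maximum
4114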
Now, that does not mean that you can pass 1 million empty arguments. On a 64-bit system, 1 million empty arguments make a pointer list of 8 MB, which would be above my stack size of 4 MiB.
$ IFS=:; /bin/true ${=${(l.1000000..:.)${:-}}}
zsh: killed /bin/true ${=${(l.1000000..:.)${:-}}}
(you'll notice it's not an E2BIG error. I'm not sure at which point the process gets killed there, though, whether it's within the execve system call or later).
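A quick sanity check of that pointer arithmetic:
$ echo $((1000000 * 8))   # bytes of argv pointers alone for one million arguments
8000000
That is roughly 7.6 MiB, well above the 4 MiB stack size limit set above.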
Also note (still on Linux 3.11) that the maximum size of a single argument or environment string is 128 KiB, regardless of the size of the stack.
$ /bin/true ${(l.131071..a.)${:-}} # 131072 OK
$ /bin/true ${(l.131072..a.)${:-}} # 131073 not
zsh: argument list too long: /bin/true
$ /bin/true ${(l.131071..a.)${:-}} ${(l.131071..a.)${:-}} # 2x 131072 OK
That will depend on the value of ARG_MAX, which can change between systems. To find out the value for your system, run (showing the result on mine as an example):
$ getconf ARG_MAX
2097152
This has nothing to do with cp or your shell; it is a limit imposed by the kernel, which will not execute (exec()) commands if their arguments are longer than ARG_MAX. So, if the length of the argument list you have given to cp is greater than ARG_MAX, the cp command will not run at all.
To answer your main question then: cp will process no files, since it will never be executed with so many arguments. I should also mention that this does not depend on the number of arguments but on their cumulative length. You could conceivably have the same issue with very few but very long file names.
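For illustration, if /src held enough (or long enough) file names to push the argument list past ARG_MAX, the failure would look something like this in zsh (the directory names are just the ones used in the workaround below, and the message is the shell reporting the E2BIG error it gets back from execve()):
$ cp /src/* /dst/
zsh: argument list too long: cp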
The way to get around this error is to run your command in a loop:
for file in /src/*; do cp "$file" /dst/; done