Is it safe to redirect stdout and stderr to the same file without file descriptor copies?

What happens when you do

some_command >>file 2>>file

is that the file will be opened for appending twice. This is safe on a POSIX filesystem: any write to a file opened for appending occurs at the current end of the file, regardless of whether the data arrives over the standard output stream or the standard error stream.
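For example, starting from a file that does not yet exist, both streams land cleanly at the end even though the file is opened twice:

$ { echo out; echo err >&2; } >>file 2>>file
$ cat file
out
err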

This relies on the underlying filesystem supporting atomic append operations. Some filesystems, such as NFS, do not support atomic append. See e.g. the question "Is file append atomic in UNIX?" on Stack Overflow.

Using

some_command >>file 2>&1

would be safe even on NFS, though, since the file is opened only once and both streams share a single open file description.

However, using

some_command >file 2>file

is not safe: the shell truncates the output file (twice), and since each stream has its own independent file offset, writes on one stream will overwrite data already written by the other.

Example:

$ { echo hello; echo abc >&2; } >file 2>file
$ cat file
abc
o

The hello string is written first (with a terminating newline), and then the string abc followed by a newline is written from standard error at its own offset (still zero), overwriting the hell. The result is the string abc with a newline, followed by what's left of the first echo's output: an o and a newline.
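A byte-by-byte view makes this clearer; each stream tracks its own offset into the file:

offset:  0  1  2  3  4  5
stdout:  h  e  l  l  o  \n    (stdout's offset advances to 6)
stderr:  a  b  c  \n          (stderr's offset was still 0)
result:  a  b  c  \n  o  \n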

Swapping the two echo commands around would produce only hello in the output file, as that string is written last and is longer than abc. The order in which the redirections occur does not matter.
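Verifying the swapped order, in the same style as above:

$ { echo abc >&2; echo hello; } >file 2>file
$ cat file
hello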

It would be better and safer to use the more idiomatic

some_command >file 2>&1
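Re-running the earlier example with this form shows both lines intact, since the two streams now share a single file offset:

$ { echo hello; echo abc >&2; } >file 2>&1
$ cat file
hello
abc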

No, it's not just as safe as the standard >>bar 2>&1.

When you're writing

foo >>bar 2>>bar

you're opening the bar file twice with O_APPEND, creating two completely independent file objects[1], each with its own state (file offset, open modes, etc.).

This is very much unlike 2>&1, which just calls the dup(2) system call and makes stderr and stdout interchangeable aliases for the same file object.
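One way to observe the difference is to trace the system calls the shell makes (a sketch assuming a Linux system with strace installed; the exact calls may vary by shell and libc):

$ strace -e trace=openat sh -c ': >>bar 2>>bar' 2>&1 | grep bar
$ strace -e trace=openat sh -c ': >>bar 2>&1' 2>&1 | grep bar

The first command should show bar being opened twice with O_APPEND (two open file descriptions); the second only once, with the 2>&1 handled by dup(2) rather than another open.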

Now, there's a problem with that; quoting the open(2) man page:

O_APPEND may lead to corrupted files on NFS filesystems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition.

You can usually count on the probability of a file like bar in foo >>bar 2>&1 being written to from two separate places at the same time being quite low. But with >>bar 2>>bar, you've just increased that probability by a dozen orders of magnitude, for no reason.

[1] "Open File Descriptions" in POSIX lingo.