Time waste of execv() and fork()
Not any longer. There's something called COW (copy-on-write): data is shared between the two processes (parent/child), and a page is copied only when one of them tries to write to it.
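A minimal C sketch of these semantics (the variable and values are arbitrary): the two processes behave as if they each had a private copy, while the kernel only actually copies a page when it is first written.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 42;
    pid_t pid = fork();         /* parent and child now share pages copy-on-write */
    if (pid == 0) {
        /* Child: this write faults, and the kernel gives the child
         * its own private copy of the page holding x. */
        x = 99;
        printf("child:  x = %d\n", x);   /* prints 99 */
        exit(EXIT_SUCCESS);
    }
    wait(NULL);                 /* let the child finish first */
    printf("parent: x = %d\n", x);       /* still 42: the parent's page is untouched */
    return EXIT_SUCCESS;
}
```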
In the past:

The `fork()` system call copied the address space of the calling process (the parent) to create a new process (the child). The copying of the parent's address space into the child was the most expensive part of the `fork()` operation.
Now:

A call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.

For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to share the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as copy-on-write. To do this, on `fork()` the kernel would copy the address space mappings from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.
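To make the pattern concrete, here is a minimal fork-plus-exec sketch in C; running `/bin/ls` in the child is just an arbitrary example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* cheap under copy-on-write */
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: touches almost none of the shared address space
         * before replacing it entirely with a new program. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");         /* reached only if exec fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);    /* parent waits for the child */
    return EXIT_SUCCESS;
}
```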
What advantage does this combination offer (over some other solution) that makes people still use it even though it involves waste?
You have to create a new process somehow, and there are very few ways for a userspace program to accomplish that. POSIX used to have `vfork()` alongside `fork()`, and some systems may have their own mechanisms, such as the Linux-specific `clone()`, but since 2008, POSIX specifies only `fork()` and the `posix_spawn()` family. The `fork` + `exec` route is more traditional, is well understood, and has few drawbacks (see below). The `posix_spawn` family is designed as a special-purpose substitute for use in contexts that present difficulties for `fork()`; you can find details in the "Rationale" section of its specification.
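For comparison, here is a minimal `posix_spawn()` sketch of the same task (again, `/bin/ls` is just an arbitrary example); it combines process creation and program loading into one call:

```c
#include <spawn.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    /* No file actions or spawn attributes are needed here, so pass NULL. */
    int err = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn: %s\n", strerror(err));
        return EXIT_FAILURE;
    }

    int status;
    waitpid(pid, &status, 0);    /* reap the spawned child */
    return EXIT_SUCCESS;
}
```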
This excerpt from the Linux man page for `vfork()` may be illuminating:

> Under Linux, `fork`(2) is implemented using copy-on-write pages, so *the only penalty incurred by `fork`(2) is the time and memory required to duplicate the parent’s page tables, and to create a unique task structure for the child*. However, in the bad old days a `fork`(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an `exec`(3) is done. Thus, for greater efficiency, BSD introduced the `vfork`() system call, which did not fully copy the address space of the parent process, but borrowed the parent’s memory and thread of control until a call to `execve`(2) or an exit occurred. The parent process was suspended while the child was using its resources. The use of `vfork`() was tricky: for example, not modifying data in the parent process depended on knowing which variables are held in a register.

(Emphasis added)
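For illustration only, a minimal sketch of the classic `vfork()` pattern the man page describes, assuming a system that still provides the call (`/bin/true` is an arbitrary placeholder program):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = vfork();         /* child borrows the parent's memory */
    if (pid == -1) {
        perror("vfork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: the parent stays suspended until we exec or _exit.
         * Writing to (almost) any data here would modify the parent's
         * memory, which is what made vfork() so tricky to use. */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127);              /* must be _exit(), not exit(), after vfork() */
    }
    int status;
    waitpid(pid, &status, 0);
    return EXIT_SUCCESS;
}
```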
Thus, your concern about waste is not well-founded for modern systems (not limited to Linux), but it was a genuine issue historically, and there were indeed mechanisms designed to avoid it. These days, most of those mechanisms are obsolete.
Another answer states:

> However, in the bad old days a `fork`(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an `exec`(3) is done.
Obviously, one person's bad old days are a lot younger than others remember.
The original UNIX systems did not have the memory to run multiple processes at once, and they did not have an MMU for keeping several processes in physical memory, ready to run, at the same logical address space: they swapped the processes that weren't currently running out to disk.

The `fork` system call was almost entirely the same as swapping the current process out to disk, except for the return value and for not replacing the remaining in-memory copy by swapping in another process. Since you had to swap out the parent process anyway in order to run the child, `fork`+`exec` incurred essentially no extra overhead.
It's true that there was a period when `fork`+`exec` was awkward: when MMUs existed that provided a mapping between logical and physical address space, but page faults did not retain enough information for copy-on-write and a number of other virtual-memory/demand-paging schemes to be feasible.

This situation was painful enough, and not just for UNIX, that hardware page-fault handling was adapted to become "replayable" pretty quickly.