Ghosts of Unix past, part 2: Conflated designs

Posted Jan 7, 2011 0:32 UTC (Fri) by bronson (subscriber, #4806)
In reply to: Ghosts of Unix past, part 2: Conflated designs by lwn555
Parent article: Ghosts of Unix past, part 2: Conflated designs

From the paper:

> Even though fork() has been improved over the years to use the COW (copy-on-write) semantics

If the years the author is referring to are the 1970s, then sure! Otherwise, the paper appears to be little more than an indictment of a poor implementation of fork.



Ghosts of Unix past, part 2: Conflated designs

Posted Jan 7, 2011 23:53 UTC (Fri) by lwn555 (guest, #72175) [Link] (2 responses)

I would not know when COW fork was implemented in various kernels.
Presumably not long after MMU hardware became available.

Still, forking a 1GB process means copying some 244,140 page-table entries (one per 4KB page) for the child. That's a lot of baggage if the child's sole purpose is to call exec(). Better to use vfork()/exec() when possible.
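
For the fork-then-exec case the copying can be skipped entirely. Here is a minimal sketch of the vfork()/exec() pattern (Linux/POSIX; the "ls -l" command is just an example, and after vfork() the child may only call exec*() or _exit()):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = vfork();        /* parent is suspended; no page tables copied */
        if (pid < 0) {
            perror("vfork");
            return 1;
        }
        if (pid == 0) {
            /* Only exec*() or _exit() are safe in the child after vfork(). */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);             /* reached only if exec failed */
        }
        int status;
        waitpid(pid, &status, 0);
        return 0;
    }

posix_spawn() wraps the same idea in a single library call and sidesteps most of vfork()'s sharp edges.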

I'd like to be clear that the overcommit issues with fork() are not an implementation problem but a fundamental consequence of what fork does.

If the parent has a working data set of 100MB, and the child only needs 5MB from the parent, fork() still marks the remaining 95MB as needed by the child.

If the parent modifies its entire 100MB working set while the child continues running with its 5MB working set, then eventually the two processes will consume 200MB instead of the 105MB that is technically needed.

So, regardless of the fork implementation, 95MB out of 200MB is wasted. As the parent spawns more children over time, the % wasted only gets worse.

Of course there are workarounds, but they come at the expense of forgoing the semantics which make fork appealing in the first place: inheriting context and data structures from the parent without IPC.
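
To make the scenario concrete, here is a small sketch of the 100MB/5MB case described above (Linux; the sizes, the sleep() standing in for the child's real work, and the omitted error handling are all just for illustration):

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define PARENT_SET (100u * 1024 * 1024)   /* parent's working set */
    #define CHILD_SET  (5u * 1024 * 1024)     /* the slice the child actually uses */

    int main(void)
    {
        char *data = malloc(PARENT_SET);
        memset(data, 1, PARENT_SET);          /* fault all the pages in */

        pid_t pid = fork();                   /* COW: nothing is copied yet */
        if (pid == 0) {
            /* Child touches only its 5MB slice, but the whole 100MB stays mapped. */
            memset(data, 2, CHILD_SET);
            sleep(10);                        /* stand-in for the child's real work */
            _exit(0);
        }

        /* Parent rewrites its entire working set.  Every page it shared with the
           child must now be copied, so the two processes consume ~200MB even
           though only ~105MB of that will ever be used again. */
        memset(data, 3, PARENT_SET);

        waitpid(pid, NULL, 0);
        return 0;
    }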

Ghosts of Unix past, part 2: Conflated designs

Posted Jan 8, 2011 0:12 UTC (Sat) by dlang (guest, #313) [Link] (1 responses)

if the child really only needs the 5MB, it can free the rest of the allocations and you are back to the 105MB total.

if the programmer isn't sure whether the child needs just the 5MB of data or the entire 100MB, then they would need to keep everything around in any case.

the worst-case of COW is that you use as much memory as you would without it. In practice this has been shown empirically to be a very large savings. Some people are paranoid about this and turn off overcommit so that even in this worst case they would have the memory, but even they benefit from the increased speed, and from the fact that almost all the time the memory isn't needed.

so I disagree with your conclusion that there is so much memory wasted.
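
As an aside on the overcommit knob mentioned above: on Linux the policy is vm.overcommit_memory (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict accounting). A trivial sketch for checking it from C; reading the sysctl through /proc is of course just one way to do it:

    #include <stdio.h>

    int main(void)
    {
        /* 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting */
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        int mode;
        if (f && fscanf(f, "%d", &mode) == 1)
            printf("vm.overcommit_memory = %d\n", mode);
        if (f)
            fclose(f);
        return 0;
    }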

Ghosts of Unix past, part 2: Conflated designs

Posted Jan 8, 2011 9:12 UTC (Sat) by lwn555 (guest, #72175) [Link]

"if the child really only needs the 5MB, it can free the rest of the allocations and you are back to the 105MB total."

Easily said. While it's technically possible to free all unused memory pages after a fork, it's unusual to actually do this. The piece of code calling fork() may not be aware of, or have any relation to, the memory allocated by the rest of the process.

Consider how difficult it would be for one library to deallocate the structures of other libraries after performing a fork.

Even if we did track every object to free after forking, malloc may or may not actually be able to return the pages to the system, particularly for memory allocated linearly via sbrk(): the break can only be lowered past free space at the top of the heap, and the objects the child still needs are likely to sit near the end.
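
For what it's worth, glibc does offer malloc_trim() for exactly this kind of post-fork cleanup, but the point above still applies: whether anything is actually returned depends on the heap layout and the glibc version. A rough sketch under those assumptions (the chunk counts are arbitrary):

    #include <malloc.h>    /* malloc_trim() -- glibc-specific */
    #include <stdlib.h>

    #define CHUNKS 25000   /* ~100MB of small allocations from the sbrk() heap */

    int main(void)
    {
        static char *obj[CHUNKS];
        int i;

        for (i = 0; i < CHUNKS; i++)
            obj[i] = malloc(4096);

        /* Hypothetical cleanup in a freshly forked child: free everything
           except a few objects that happen to sit near the top of the heap. */
        for (i = 0; i < CHUNKS - 100; i++)
            free(obj[i]);

        /* The freed space lies below the surviving objects, so the break cannot
           be lowered past them; malloc_trim() may still hand some of it back to
           the kernel via madvise(), depending on the glibc version. */
        malloc_trim(0);

        /* ... the child would carry on with the remaining objects ... */
        return 0;
    }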

"the worst-case of COW is that you use as much memory as you would without it."

We can agree there is no reason not to use copy-on-write to implement fork.

"so I disagree with your conclusion that there is so much memory wasted."

Then I think you misunderstood the example. No matter which way you cut it, so long as the child doesn't do anything to explicitly free unused pages, it is stuck with 95MB of unusable RAM. If the parent updates its entire working set, then the child becomes the sole owner of that data. If the parent quits and the child is allowed to continue, the useless 95MB is still there. And this is only for one child.

You may feel this is a contrived example, but I can think of many instances where it would be desirable for a large parent to branch work out into child processes, and where this becomes a problem.

Fork works great in academic examples and in programs where the parent is small, doesn't touch its data, or the children are short-lived. But there are applications where the fork paradigm in and of itself leads to excessive memory consumption.

