disk i/o unfairness with multiple processes

I've been seeing unfairness in disk i/o when multiple
processes compete for resources.  While some unfairness
can be tolerated in order to gain overall efficiency
(e.g. avoiding long seeks), there is a limit.  I've seen
this in various scenarios, with FreeBSD 6.0, 6.2 and 7.0.
Here is a simple test case which demonstrates the problem,
and should be easy for others to duplicate.

2 GiB memory
7200 rpm SATA connected to nforce4-ultra
FreeBSD 7.0
FFS, soft-updates

$ time man de > /dev/null

real    0m0.013s
user    0m0.011s
sys     0m0.001s

$ cat 9_GB_file 9_GB_file 9_GB_file 9_GB_file > /dev/null &
[1] 84904
$ time man de > /dev/null
[1]+  Done   cat 9_GB_file 9_GB_file 9_GB_file 9_GB_file > /dev/null

real    9m20.508s
user    0m1.053s
sys     0m44.091s
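The whole test can be wrapped in one script.  This is only a sketch:
the file path and size are placeholders (the original test used a
9 GB file on a 2 GiB machine), and the file must be several times
larger than RAM so the background read actually hits the disk
instead of the buffer cache.

```shell
#!/bin/sh
# Hypothetical repro sketch; FILE and its size are placeholders.
# Scale the size to several times physical RAM on a real test.
FILE=/tmp/io_test_file
dd if=/dev/zero of="$FILE" bs=1048576 count=64 2>/dev/null

# Baseline: a small, latency-sensitive operation with the disk idle.
time man de > /dev/null 2>&1

# Start a large sequential read in the background...
cat "$FILE" "$FILE" "$FILE" "$FILE" > /dev/null &
bgpid=$!

# ...and time the same small operation while it runs.
time man de > /dev/null 2>&1

wait "$bgpid"
rm -f "$FILE"
```

Watching `systat -vmstat` (or `iostat` with a wait interval) while
the second `time` runs shows how the disk bandwidth is being split.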

systat -vmstat reports that cat is reading at 50-60 MB/s, which is
reasonable for this disk.

The 9_GB_file and /usr are both on the same disk.  Accessing
different disks is more likely to give the expected performance.
I suspect that some scenarios bottleneck in memory.

I certainly expect man to take longer when it is competing for
disk i/o, but 9 minutes seems a bit much.  The user and sys times
are also up significantly, which seems odd.