Re: FreeBSD 7.0-RELEASE: Can I specify the maximum number of cores that kernel can recognize ?
Hattori, Shigehiro wrote:
> I'm trying to measure a BIND caching name server's multi-threading
> performance on FreeBSD 7, as below.
> # of cores query/second
> 1 xx
> 2 xx
> 4 xx
> 6 xx
> 8 xx
> My testing machine has 8 cores. ( quad core * 2 )
> I'd like to specify the maximum # of cores that the kernel can recognize.
> Can I specify the maximum # of cores with a boot parameter or something?
You can disable the cores you do not want by turning off their lapic.
Add to /boot/loader.conf:
hint.lapic.0.disabled=1 turns off lapic 0, and so on. Cross-reference the
lapic numbers against the CPU/APIC ID table printed in dmesg at boot.
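For example, to leave four of the eight cores active, a /boot/loader.conf sketch (the APIC IDs 4-7 here are assumptions; verify the ID-to-core mapping in your own dmesg, and note the boot processor's lapic cannot be disabled):

```
# Disable four of the eight lapics (APIC IDs 4-7 are assumed; check dmesg)
hint.lapic.4.disabled=1
hint.lapic.5.disabled=1
hint.lapic.6.disabled=1
hint.lapic.7.disabled=1
```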
> I've already done Bind multi threading performance test on Linux (
> CentOS5 )
> In the case of CentOS5, I specified the maximum # of cores in grub.conf
> with the kernel parameter "maxcpus=1" (or 2 or 4 or ...).
> # cat /etc/grub.conf
> title CentOS (2.6.18-53.1.21.el5PAE)
> root (hd0,1)
> kernel /boot/vmlinuz-2.6.18-53.1.21.el5PAE ro root=LABEL=/ rhgb
> quiet maxcpus=6
> initrd /boot/initrd-2.6.18-53.1.21.el5PAE.img
> The following are the results I got on CentOS5 ( kernel:
> 2.6.18-53.1.21.el5 ).
> -- Bind 9.4.2 caching name server on CentOS5
> # of cores query/second CPU ( named )
> 1 3578 99.9
> 2 5070 196
> 4 6608 362
> 6 9042 527
> 8 10552 678
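For reference, those CentOS figures scale well below linearly; a quick awk sketch computing speedup relative to the 1-core rate (numbers taken from the table above):

```shell
# Speedup per core count, relative to the 1-core rate of 3578 qps
awk 'BEGIN {
  base = 3578
  n = split("1:3578 2:5070 4:6608 6:9042 8:10552", rows, " ")
  for (i = 1; i <= n; i++) {
    split(rows[i], r, ":")
    printf "%s cores: %.2fx\n", r[1], r[2] / base
  }
}'
# 8 cores come out at roughly 2.95x the 1-core rate
```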
> - Bind's machine spec
> CPU: Intel Xeon E5346 2.3GHz ( quad core * 2 )
> Memory: 4GB
> Bind 9.4.2
> Cache size: 1400MB
> Max recursive clients: 50000
> - Traffic generator: queryperf
> Query list: all queries are unique ( that means "no answer in the cache" )
Make sure you are resolving these queries against another local server.
If you're querying random servers on the internet then you're mostly
going to be benchmarking your uplink latency, and the maximum query rate
will be limited by the number of broken servers you query that do not
respond but simply time out.
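A sketch of that setup: generate a query file of unique names (so nothing is ever answered from cache, matching the test design above) and aim queryperf at a resolver on the local network. The hostnames and zone below are assumptions for illustration:

```shell
# Generate 10000 unique query names; a fresh name per line means
# every query misses the cache and forces a recursive lookup.
for i in $(seq 1 10000); do
  printf 'host%06d.example.test A\n' "$i"
done > queries.txt

# Run the generator against a *local* resolver, not the internet:
# queryperf -d queries.txt -s 127.0.0.1
```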
Those numbers are quite low even for Linux; on similar hardware I can
achieve 60000 qps on Linux and about 105000 qps on FreeBSD.
With 1Gb Ethernet I have to query from multiple clients to get that high
because of request latency (with 10GbE I can saturate it from a single
client). I was not disabling cores but limiting the number of BIND
threads, which should be approximately the same thing, especially for 8
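Limiting the thread count can be done when starting named; per BIND 9's named(8) manual page, the -n flag sets the number of worker threads (the count below is just an example):

```
# Start named with 4 worker threads instead of one per detected CPU
named -n 4
```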