Tor on FreeBSD Performance issues

Tor on FreeBSD Performance issues

Julian Wissmann
Hi,

I'm an admin for a non-profit that runs Tor exit nodes, most of them currently on Linux, but due to problems on our high-bandwidth nodes we decided to migrate one of them to FreeBSD to do some testing.
I've been using FreeBSD for quite some years now, longer than Linux, so I figured this would be a no-brainer, but it turns out that it isn't.

On FreeBSD I currently manage to push 150-200 Mbit/s with some heavy tuning already applied; on Linux it is roughly 500 Mbit/s.

Therefore I'm wondering whether I'm really running into some limitation here, or whether I'm actually doing something wrong.

ifconfig bge0
bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=c01db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,TSO4,VLAN_HWTSO,LINKSTATE>

As you can see, I've compiled in polling, and I'm currently running with kern.hz=16000, as that has given me the best performance so far.
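
For reference, the relevant bits of that setup look roughly like this (a sketch of what I have, not a recommendation):

    # kernel config: polling support compiled in
    options DEVICE_POLLING

    # /boot/loader.conf: tick rate (kern.hz is a loader tunable)
    kern.hz=16000

    # polling then enabled per interface at runtime
    ifconfig bge0 polling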

This is my netstat output for tcp on ipv4:

netstat -s
tcp:
        265086752 packets sent
                84255022 data packets (155791599035 bytes)
                2244601 data packets (2698410010 bytes) retransmitted
                73011 data packets unnecessarily retransmitted
                684 resends initiated by MTU discovery
                153151729 ack-only packets (0 delayed)
                0 URG only packets
                19864 window probe packets
                16854982 window update packets
                8551532 control packets
        236720260 packets received
                86600847 acks (for 155842386836 bytes)
                5062568 duplicate acks
                76267 acks for unsent data
                138258588 packets (150041170335 bytes) received in-sequence
                1502804 completely duplicate packets (562243206 bytes)
                44193 old duplicate packets
                91821 packets with some dup. data (23693391 bytes duped)
                6598536 out-of-order packets (7168950882 bytes)
                32074 packets (20848106 bytes) of data after window
                7806 window probes
                1457705 window update packets
                159860 packets received after close
                5219 discarded for bad checksums
                3 discarded for bad header offset fields
                0 discarded because packet too short
                1468 discarded due to memory problems
        5665849 connection requests
        694088 connection accepts
        0 bad connection attempts
        129 listen queue overflows
        9308 ignored RSTs in the windows
        3289250 connections established (including accepts)
        6334698 connections closed (including 449721 drops)
                916420 connections updated cached RTT on close
                923786 connections updated cached RTT variance on close
                273103 connections updated cached ssthresh on close
        354989 embryonic connections dropped
        81015541 segments updated rtt (of 56772442 attempts)
        9127304 retransmit timeouts
                19875 connections dropped by rexmit timeout
        21274 persist timeouts
                215 connections dropped by persist timeout
        4541 Connections (fin_wait_2) dropped because of timeout
        10657 keepalive timeouts
                0 keepalive probes sent
                10657 connections dropped by keepalive
        39113689 correct ACK header predictions
        121244352 correct data packet header predictions
        698461 syncache entries added
                21576 retransmitted
                14546 dupsyn
                0 dropped
                694088 completed
                0 bucket overflow
                0 cache overflow
                1046 reset
                3338 stale
                135 aborted
                0 badack
                6 unreach
                0 zone failures
        698461 cookies sent
        232 cookies received
        173007 hostcache entries added
                0 bucket overflow
        285122 SACK recovery episodes
        584154 segment rexmits in SACK recovery episodes
        730380132 byte rexmits in SACK recovery episodes
        3053612 SACK options (SACK blocks) received
        6689960 SACK options (SACK blocks) sent
        0 SACK scoreboard overflow
        8236 packets with ECN CE bit set
        26367032 packets with ECN ECT(0) bit set
        56 packets with ECN ECT(1) bit set
        312220 successful ECN handshakes
        34178 times ECN reduced the congestion window

My sysctls are roughly equivalent to these: http://serverfault.com/questions/64356/freebsd-performance-tuning-sysctls-loader-conf-kernel

Any hints?
Do I need to provide more info?

Julian

Re: Tor on FreeBSD Performance issues

K. Macy
On Mon, Feb 6, 2012 at 9:38 PM, Julian Wissmann
<[hidden email]> wrote:
> Hi,
>
> I'm an admin for a non-profit that runs Tor exit nodes, most of them currently on Linux, but due to problems on our high-bandwidth nodes we decided to migrate one of them to FreeBSD to do some testing.
> I've been using FreeBSD for quite some years now, longer than Linux, so I figured this would be a no-brainer, but it turns out that it isn't.
>
> On FreeBSD I currently manage to push 150-200 Mbit/s with some heavy tuning already applied; on Linux it is roughly 500 Mbit/s.

Tor tends to keep a lot of connections open, but that is really very
little bandwidth, and you really shouldn't need to have polling on or
have hz set that high. What does CPU utilization look like on these
systems? I don't know if it is part of the problem, but TSO really
isn't very useful with large numbers of connections; it is better
suited to helping a single connection saturate an interface. Could
you please turn that off?
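
For example, something like this should do it (a sketch; bge0 is taken from your ifconfig output, and the same flags can be made persistent via the ifconfig_bge0 line in /etc/rc.conf):

    # disable TSO (and the VLAN hardware TSO variant) on the interface
    ifconfig bge0 -tso -vlanhwtso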


Thanks


Re: Tor on FreeBSD Performance issues

Adrian Chadd
Can you verify that it's properly using kqueue, rather than poll?


Adrian

Re: Tor on FreeBSD Performance issues

Julian Wissmann
 1086 tor      RET   read 1568/0x620
  1086 tor      CALL  clock_gettime(0xd,0x7fffffffd910)
  1086 tor      RET   clock_gettime 0
  1086 tor      CALL  kevent(0x3,0x3e18000,0x2,0x3dbe000,0x400,0x7fffffffd920)
  1086 tor      GIO   fd 3 wrote 64 bytes

1086 tor      RET   kevent 3
  1086 tor      CALL  clock_gettime(0x4,0x7fffffffd940)
  1086 tor      RET   clock_gettime 0
  1086 tor      CALL  gettimeofday(0x7fffffffd930,0)
  1086 tor      RET   gettimeofday 0
  1086 tor      CALL  recvfrom(0xbc2,0x4822020,0x3e40,0,0,0)
  1086 tor      GIO   fd 3010 read 4096 bytes

As I understand kevent to be part of kqueue, I'll go with a yes here. What I also take from it is a huge number of clock_gettime() and gettimeofday() calls.
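
(For reference, the trace above was captured roughly like this; 1086 is the Tor pid from the output:)

    ktrace -p 1086        # attach to the running Tor process
    sleep 10              # let it record syscalls for a while
    ktrace -C             # detach / stop tracing
    kdump -f ktrace.out | less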

Julian

On 08.02.2012, at 17:39, Adrian Chadd wrote:

> Can you verify that it's properly using kqueue, rather than poll?
>
>
> Adrian



Re: Tor on FreeBSD Performance issues

Alexander Leidinger
Quoting Julian Wissmann <[hidden email]> (from Wed, 8 Feb  
2012 18:33:53 +0100):

> As I understand kevent to be part of kqueue, I'll go with a yes here. What I also
> take from it is a huge number of clock_gettime() and gettimeofday() calls.

And there you probably have the cause of the slowdown. The clock
subsystem in FreeBSD works at a higher precision than what you get
in Linux. If possible, try to trim down the number of calls, or
use a less precise (and faster) clock source, e.g. one of the _FAST
ones (see http://www.freebsd.org/cgi/man.cgi?clock_gettime).
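
A minimal sketch of what such a change looks like (CLOCK_MONOTONIC_FAST is FreeBSD-specific and reads a cached timecounter value, so precision drops to roughly 1/kern.hz):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            struct timespec ts;

            /* Same clock as CLOCK_MONOTONIC, but served from a cached
             * value instead of reading the hardware timecounter. */
            if (clock_gettime(CLOCK_MONOTONIC_FAST, &ts) != 0) {
                    perror("clock_gettime");
                    return 1;
            }
            printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
            return 0;
    }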

Bye,
Alexander.


Re: Tor on FreeBSD Performance issues

Julian Wissmann

Hi
>
> On 11 Feb 2012, at 00:06, Steven Murdoch wrote:
>
>> On 10 Feb 2012, at 22:22, Robert N. M. Watson wrote:
>>> I wonder if we're looking at some sort of difference in socket buffer tuning between Linux and FreeBSD that is leading to better link utilisation under this workload. Both FreeBSD and Linux auto-tune socket buffer sizes, but I'm not sure if their policies for enabling/disabling auto-tuning differ. Do we know if Tor fixes socket buffer sizes in such a way that it might lead to FreeBSD disabling auto-tuning?
>>
>> If ConstrainedSockets is set to 1 (it defaults to 0), then Tor will "setsockopt(sock, SOL_SOCKET, SO_SNDBUF"  and "setsockopt(sock, SOL_SOCKET, SO_RCVBUF" to ConstrainedSockSize (defaults 8192). Otherwise I don't see any fiddling with buffer size. So I'd first confirm that ConstrainedSockets is set to zero, and perhaps try experimenting with it on for different values of ConstrainedSockSize.
> In FreeBSD, I believe the current policy is that any TCP socket that doesn't have a socket option specifically set will be auto-tuning. So it's likely that, as long as ConstrainedSockSize isn't set, auto-tuning is enabled.

This is set to zero in Tor.
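
(For reference, the clamping Steven describes, i.e. what Tor does when ConstrainedSockets is enabled, amounts to roughly this; a sketch, not Tor's actual code, and the helper name is made up:)

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Pin both buffer sizes to ConstrainedSockSize (8192 by default).
     * Per Robert's note above, a TCP socket with these options set
     * explicitly is excluded from FreeBSD's buffer auto-tuning. */
    static int
    constrain_socket(int sock, int size)
    {
            if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0)
                    return -1;
            if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
                    return -1;
            return 0;
    }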
>
>>> I'm a bit surprised by the out-of-order packet count -- is that typical of a Tor workload, and can we compare similar statistics on other nodes there? This could also be a symptom of TCP reassembly queue issues. Lawrence: did we get the fixes in place there to do with the bounded reassembly queue length, and/or are there any workarounds for that issue? Is it easy to tell if we're hitting it in practice?
>>
>> I can't think of any inherent reason for excessive out-of-order packets, as the host TCP stack is used by all Tor nodes currently. It could be some network connections from users are bad (we have plenty of dial-up users).
>
> I guess what I'm wondering about is relative percentages. Out-of-order packets can also arise as a result of network stack bugs, and might explain a lower aggregate bandwidth. The netstat -Q options I saw in the forwarded e-mail suggest that the scenarios that could lead to this aren't present, but since it stands out, it would be worth trying to explain just to convince ourselves it's not a stack bug.
As we have two boxes with identical configuration in the same datacenter, I can give some Linux output, too:
# netstat -s
Ip:
    1099780169 total packets received
    0 forwarded
    0 incoming packets discarded
    2062308427 incoming packets delivered
    2800933295 requests sent out
    694 outgoing packets dropped
    798042 fragments dropped after timeout
    143378847 reassemblies required
    45697700 packets reassembled ok
    18522117 packet reassembles failed
    1070 fragments received ok
    761 fragments failed
    28174 fragments created
Icmp:
    92792968 ICMP messages received
    18458681 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 73204262
        timeout in transit: 6996342
        source quenches: 813143
        redirects: 9100882
        echo requests: 1646656
        echo replies: 5
    2005869 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 359208
        echo request: 5
        echo replies: 1646656
IcmpMsg:
        InType0: 5
        InType3: 73204262
        InType4: 813143
        InType5: 9100882
        InType8: 1646656
        InType11: 6996342
        OutType0: 1646656
        OutType3: 359208
        OutType8: 5
Tcp:
    4134119965 active connections openings
    275823710 passive connection openings
    2002550589 failed connection attempts
    199749970 connection resets received
    31931 connections established
    1839369825 segments received
    3631158795 segments send out
    3353305069 segments retransmited
    2152248 bad segments received.
    237858281 resets sent
Udp:
    129942286 packets received
    203329 packets to unknown port received.
    0 packet receive errors
    109523321 packets sent
UdpLite:
TcpExt:
    7088 SYN cookies sent
    15275 SYN cookies received
    3196797 invalid SYN cookies received
    1093456 resets received for embryonic SYN_RECV sockets
    36073572 packets pruned from receive queue because of socket buffer overrun
    77060 packets pruned from receive queue
    232 packets dropped from out-of-order queue because of socket buffer overrun
    362884 ICMP packets dropped because they were out-of-window
    85 ICMP packets dropped because socket was locked
    673831896 TCP sockets finished time wait in fast timer
    48600 time wait sockets recycled by time stamp
    2013223394 delayed acks sent
    3477567 delayed acks further delayed because of locked socket
    Quick ack mode was activated 440274027 times
    35711291 times the listen queue of a socket overflowed
    35711291 SYNs to LISTEN sockets dropped
    457 packets directly queued to recvmsg prequeue.
    1460 bytes directly in process context from backlog
    48211 bytes directly received in process context from prequeue
    1494466591 packet headers predicted
    33 packets header predicted and directly queued to user
    4257229715 acknowledgments not containing data payload received
    740819251 predicted acknowledgments
    442309 times recovered from packet loss due to fast retransmit
    197193098 times recovered from packet loss by selective acknowledgements
    494378 bad SACK blocks received
    Detected reordering 221053 times using FACK
    Detected reordering 1053064 times using SACK
    Detected reordering 72059 times using reno fast retransmit
    Detected reordering 4265 times using time stamp
    336672 congestion windows fully recovered without slow start
    356482 congestion windows partially recovered using Hoe heuristic
    41059770 congestion windows recovered without slow start by DSACK
    54306977 congestion windows recovered without slow start after partial ack
    245685510 TCP data loss events
    TCPLostRetransmit: 7881258
    421631 timeouts after reno fast retransmit
    70726251 timeouts after SACK recovery
    26797894 timeouts in loss state
    349218987 fast retransmits
    19632788 forward retransmits
    224201891 retransmits in slow start
    2441482671 other TCP timeouts
    220051 classic Reno fast retransmits failed
    22663942 SACK retransmits failed
    160105897 packets collapsed in receive queue due to low socket buffer
    568326755 DSACKs sent for old packets
    12316261 DSACKs sent for out of order packets
    157800118 DSACKs received
    1008695 DSACKs for out of order packets received
    2043 connections reset due to unexpected SYN
    48512275 connections reset due to unexpected data
    15085625 connections reset due to early user close
    1702109944 connections aborted due to timeout
    TCPSACKDiscard: 231850
    TCPDSACKIgnoredOld: 99417376
    TCPDSACKIgnoredNoUndo: 33053947
    TCPSpuriousRTOs: 5163955
    TCPMD5Unexpected: 8
    TCPSackShifted: 290984575
    TCPSackMerged: 613203726
    TCPSackShiftFallback: 747049207
IpExt:
    InBcastPkts: 12617896
    OutBcastPkts: 1456356
    InOctets: -1096131435
    OutOctets: -1263483369
    InBcastOctets: -2144923256
    OutBcastOctets: 187483424
>
>>> On the other hand, I think Steven had mentioned that Tor has changed how it does exit node load distribution to better take into account realised rather than advertised bandwidth. If that's the case, you might get larger systemic effects causing feedback: if you offer slightly less throughput then you get proportionally less traffic. This is something I can ask Steven about on Monday.
>>
>> There is active probing of capacity, which then is used to adjust the weighting factors that clients use.
>
> So there is a chance that the effect we're seeing has to do with clients not being directed to the host, perhaps due to larger systemic issues, or the FreeBSD box responding less well to probing and therefore being assigned less work by Tor as a whole. Are there any tools for diagnosing these sorts of interactions in Tor, or fixing elements of the algorithm to allow experiments with capacity to be done more easily? We can treat this as a FreeBSD stack problem in isolation, but in as much as we can control for effects like that, it would be useful.
>
> There's a non-trivial possibility that we're simply missing a workaround for known-bad Broadcom hardware, as well, so it would be worth our taking a glance at the pciconf -lv output describing the card so we can compare Linux driver workarounds with FreeBSD driver workarounds, and make sure we have them all. If I recall correctly, that silicon is not known for its correctness, so failing to disable some hardware feature could have significant effect.

# pciconf -lv
bge0@pci0:32:0:0: class=0x020000 card=0x705d103c chip=0x165b14e4 rev=0x10 hdr=0x00
    vendor     = 'Broadcom Corporation'
    device     = 'NetXtreme BCM5723 Gigabit Ethernet PCIe'
    class      = network
    subclass   = ethernet
bge1@pci0:34:0:0: class=0x020000 card=0x705d103c chip=0x165b14e4 rev=0x10 hdr=0x00
    vendor     = 'Broadcom Corporation'
    device     = 'NetXtreme BCM5723 Gigabit Ethernet PCIe'
    class      = network
    subclass   = ethernet
>
>>> Could someone remind me if Tor is multi-threaded these days, and if so, how socket I/O is distributed over threads?
>>
>> I believe that Tor is single-threaded for the purposes of I/O. Some server operators with fat pipes have had good experiences of running several Tor instances in parallel on different ports to increase bandwidth utilisation.
>
> It would be good to confirm the configuration in this particular case to make sure we understand it. It would also be good to know if the main I/O thread in Tor is saturating the core it's running on -- if so, we might be looking at some poor behaviour relating to, for example, frequent timestamp checking, which is currently more expensive on FreeBSD than Linux.
We have two Tor processes running. Tor still only uses multi-threading for crypto work, and not even for all of that (only onionskins). With polling I actually got both Tor processes to nearly saturate the cores they were on, but now that I've disabled polling and gone back to kern.hz=1000 I don't get there. Currently one process is at 60% WCPU, the other at about 50%.
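
(Each instance runs with its own torrc, roughly along these lines; the paths and ports here are illustrative rather than our exact configuration:)

    # torrc for instance 1
    DataDirectory /var/db/tor/node1
    ORPort 9001

    # torrc for instance 2
    DataDirectory /var/db/tor/node2
    ORPort 9002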

As has been asked: yes, it is a FreeBSD 9 box, and no, there is no net.inet.tcp.inflight.enable.
Also, libevent is using kqueue, and I've tried patching both Tor and libevent to use CLOCK_MONOTONIC_FAST and CLOCK_REALTIME_FAST, as suggested by Alexander.

If by flow cache you mean net.inet.flowtable, then I believe that sysctl won't show up unless I activate IP forwarding, which I have not (and indeed I don't have net.inet.flowtable available).

Also some sysctls as requested:
kern.ipc.somaxconn=16384
kern.ipc.maxsockets=204800
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.maxvnodes=200000
net.inet.tcp.recvbuf_max=10485760
net.inet.tcp.recvbuf_inc=65535
net.inet.tcp.sendbuf_max=10485760
net.inet.tcp.sendbuf_inc=65535
net.inet.tcp.sendspace=10485760
net.inet.tcp.recvspace=10485760
net.inet.tcp.delayed_ack=0
net.inet.ip.portrange.first=1024
net.inet.ip.portrange.last=65535
net.inet.ip.rtexpire=2
net.inet.ip.rtminexpire=2
net.inet.ip.rtmaxcache=1024
net.inet.tcp.rfc1323=0
net.inet.tcp.maxtcptw=200000
net.inet.ip.intr_queue_maxlen=4096
net.inet.tcp.ecn.enable=1    (net.inet.ip.intr_queue_drops is zero)
net.inet.ip.portrange.reservedlow=0
net.inet.ip.portrange.reservedhigh=0
net.inet.ip.portrange.hifirst=1024
security.mac.portacl.enabled=1
security.mac.portacl.suser_exempt=1
security.mac.portacl.port_high=1023
security.mac.portacl.rules=uid:80:tcp:80
security.mac.portacl.rules=uid:256:tcp:443

Thanks for the replies and all of this information.

Julian

Re: Tor on FreeBSD Performance issues

Marcus Reid
On Sun, Feb 12, 2012 at 03:58:54PM +0100, Julian Wissmann wrote:
> Also libevent is using kqueue and I've tried patching both Tor and
> libevent to use CLOCK_MONOTONIC_FAST and CLOCK_REALTIME_FAST, as has
> been pointed out by Alexander.

Just a reminder of rwatson's LD_PRELOAD trick to speed up gettimeofday():

  http://www.watson.org/~robert/freebsd/clock/
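
Usage is along these lines (the library path below is only a placeholder for whatever that package builds; tor's -f flag just points it at its config file):

    env LD_PRELOAD=/usr/local/lib/libfastclock.so /usr/local/bin/tor -f /usr/local/etc/tor/torrc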

Marcus