Re: Performance Tracker project update

Re: Performance Tracker project update

Kris Kennaway
Erik Cederstrand wrote:

> Hi
>
> I'd like to send a small update on my progress on the Performance
> Tracker project.
>
> I now have a small setup of a server and a slave chugging along,
> currently collecting data. I'm following CURRENT and collecting results
> from super-smack and unixbench.
>
> The project still needs some work, but there's a temporary web interface
> to the data here: http://littlebit.dk:5000/plot/. Apart from the
> plotting it's possible to compare two dates and see the files that have
> changed. Error bars are 3*standard deviation, for the points with
> multiple measurements.
>
> Of interest is e.g. super-smack (select-key, 1 client) right when the
> GENERIC kernel was moved from the 4BSD to ULE scheduler on Oct. 19.
> Unixbench (arithmetic test, float) also has a significant jump on Oct. 3.
>
> The setup of the slave is roughly documented on the page, but I'll be
> writing a full report and documentation over the next month.
>
> Comments are very welcome but please followup on performance@.

This is coming along very nicely indeed!

One suggestion I have is that as more metrics are added, it becomes
important to have an "at a glance" overview of changes so we can monitor
for performance improvements and regressions across many workloads.

One way to do this would be a matrix of each metric with its change
compared to recent samples.  e.g. you could do a Student's t-test
comparison of today's numbers with those from yesterday, or from a week
ago, and colour-code those that show a significant deviation from "no
change".  This might be a bit noisy on short timescales, so you could
aggregate data into larger bins and compare e.g. moving 1-week
aggregates.  Fluctuations on short timescales won't stand out, but if
there is a real change then it will show up less than a week later.
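
Roughly, in code (a sketch only; SciPy assumed, and the function name
and threshold are made up for illustration):

    from scipy import stats

    def changed(today, baseline, alpha=0.01):
        # Welch's t-test: could these two sample sets share a mean?
        t, p = stats.ttest_ind(today, baseline, equal_var=False)
        return p < alpha    # True -> colour-code this cell in the matrix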

These significant events could also be graphed themselves and/or a
history log maintained (or automatically annotated on the individual
graphs) so historical changes can also be pinpointed.

At some point the ability to annotate the data will become important
(e.g. "We understand the cause of this, it was r1.123 of foo.c, which
was corrected in r1.124.  The developer responsible has been shot.")

Kris

P.S. If I understand correctly, the float test shows a regression?  The
metric is calculations/second, so higher = better?

Re: Performance Tracker project update

Erik Cederstrand
Kris Kennaway wrote:
>
> This is coming along very nicely indeed!
>
> One suggestion I have is that as more metrics are added, it becomes
> important to have an "at a glance" overview of changes so we can monitor
> for performance improvements and regressions across many workloads.
>
> One way to do this would be a matrix of each metric with its change
> compared to recent samples.  e.g. you could do a Student's t-test
> comparison of today's numbers with those from yesterday, or from a week
> ago, and colour-code those that show a significant deviation from "no
> change".  This might be a bit noisy on short timescales, so you could
> aggregate data into larger bins and compare e.g. moving 1-week
> aggregates.  Fluctuations on short timescales won't stand out, but if
> there is a real change then it will show up less than a week later.

I agree that there's a need for an overview and some sort of
notification. I've been collecting historical data to get a baseline for
the statistics and I'll try to see what I can do over the next weeks.

> These significant events could also be graphed themselves and/or a
> history log maintained (or automatically annotated on the individual
> graphs) so historical changes can also be pinpointed.
>
> At some point the ability to annotate the data will become important
> (e.g. "We understand the cause of this, it was r1.123 of foo.c, which
> was corrected in r1.124.  The developer responsible has been shot.")

There's a field in the database for this sort of thing. I just think it
needs some sort of authentication. That'll have to wait a bit.

> P.S. If I understand correctly, the float test shows a regression?  The
> metric is calculations/second, so higher = better?

The documentation on Unixbench is scarce, but I would think so.

BTW if anyone's interested my SVN repo is online at:

svn://littlebit.dk/website/trunk    (Pylons project)
svn://littlebit.dk/tracker/trunk    (sh/Python scripts for running the
server and slaves)

Be careful with your eyes - this is my first attempt at both shell
scripting and Python :-)

Erik

Re: Performance Tracker project update

Kris Kennaway
Erik Cederstrand wrote:

> Kris Kennaway wrote:
>>
>> This is coming along very nicely indeed!
>>
>> One suggestion I have is that as more metrics are added, it becomes
>> important to have an "at a glance" overview of changes so we can
>> monitor for performance improvements and regressions across many
>> workloads.
>>
>> One way to do this would be a matrix of each metric with its change
>> compared to recent samples.  e.g. you could do a Student's t-test
>> comparison of today's numbers with those from yesterday, or from a
>> week ago, and colour-code those that show a significant deviation from
>> "no change".  This might be a bit noisy on short timescales, so you
>> could aggregate data into larger bins and compare e.g. moving 1-week
>> aggregates.  Fluctuations on short timescales won't stand out, but if
>> there is a real change then it will show up less than a week later.
>
> I agree that there's a need for an overview and some sort of
> notification. I've been collecting historical data to get a baseline for
> the statistics and I'll try to see what I can do over the next weeks.
>
>> These significant events could also be graphed themselves and/or a
>> history log maintained (or automatically annotated on the individual
>> graphs) so historical changes can also be pinpointed.
>>
>> At some point the ability to annotate the data will become important
>> (e.g. "We understand the cause of this, it was r1.123 of foo.c, which
>> was corrected in r1.124.  The developer responsible has been shot.")
>
> There's a field in the database for this sort of thing. I just think it
> needs some sort of authentication. That'll have to wait a bit.

Sounds good.

>> P.S. If I understand correctly, the float test shows a regression?  
>> The metric is calculations/second, so higher = better?
>
> The documentation on Unixbench is scarce, but I would think so.

Interesting.  Some candidate changes from 2007-10-02:

   Modified files:
     contrib/gcc          opts.c
   Log:
   Do not imply -ftree-vrp with -O2 and above.  One must explicitly specify
   '-ftree-vrp' if one wants it.
   Some bad code generation has been tracked to -ftree-vrp.  jdk1{5,6} are
   notable examples.

     sys/kern             sched_ule.c
   Log:
    - Move the rebalancer back into hardclock to prevent potential softclock
      starvation caused by unbalanced interrupt loads.
 - Change the rebalancer to work on stathz ticks but retain
   randomization.
    - Simplify locking in tdq_idled() to use the tdq_lock_pair() rather than
      complex sequences of locks to avoid deadlock.


     sys/kern             sched_ule.c
   Log:
    - Reassign the thread queue lock to newtd prior to switching.  Assigning
      after the switch leads to a race where the outgoing thread still owns
      the local queue lock while another cpu may switch it in.  This race
      is only possible on machines where cpu_switch can take significantly
      longer on different cpus which in practice means HTT machines with
      unfair thread scheduling algorithms.

Is anyone else able to look into this?

> BTW if anyone's interested my SVN repo is online at:
>
> svn://littlebit.dk/website/trunk    (Pylons project)
> svn://littlebit.dk/tracker/trunk    (sh/Python scripts for running the
> server and slaves)
>
> Be careful with your eyes - this is my first attempt at both shell
> scripting and Python :-)

:)

Kris

Re: Performance Tracker project update

Robert N. M. Watson
In reply to this post by Kris Kennaway

On Wed, 23 Jan 2008, Erik Cederstrand wrote:

> I'd like to send a small update on my progress on the Performance Tracker
> project.
>
> I now have a small setup of a server and a slave chugging along, currently
> collecting data. I'm following CURRENT and collecting results from
> super-smack and unixbench.
>
> The project still needs some work, but there's a temporary web interface to
> the data here: http://littlebit.dk:5000/plot/. Apart from the plotting it's
> possible to compare two dates and see the files that have changed. Error
> bars are 3*standard deviation, for the points with multiple measurements.
>
> Of interest is e.g. super-smack (select-key, 1 client) right when the
> GENERIC kernel was moved from the 4BSD to ULE scheduler on Oct. 19.
> Unixbench (arithmetic test, float) also has a significant jump on Oct. 3.
>
> The setup of the slave is roughly documented on the page, but I'll be
> writing a full report and documentation over the next month.
>
> Comments are very welcome but please followup on performance@.

This looks really exciting!

Do you plan to add a way so that people can submit performance data?  I.e., if
I set up my own test box and want to submit a result once a week for that,
will there be a way for me to get set up with a username/password, submit
configuration information, and then automatically submit test result
datapoints?  Especially if I can specify both the X and Y coordinates so that
I can backdate results should I go back and generate test results for old
kernels?
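
Purely to illustrate the kind of thing I mean, a datapoint submission
could be as simple as an authenticated POST (the endpoint and field
names here are invented, not anything the tracker has today):

    import json
    import urllib.request

    datapoint = {
        "benchmark": "unixbench/arithmetic-float",
        "config": "my-test-box",   # a previously registered configuration
        "x": "2007-10-03",         # the X coordinate, so results can be backdated
        "y": 123456.7,             # the measured value
    }
    req = urllib.request.Request(
        "http://littlebit.dk:5000/submit",          # invented endpoint
        data=json.dumps(datapoint).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)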

Very neat, and I look forward to seeing more!

Robert N M Watson
Computer Laboratory
University of Cambridge

Re: Performance Tracker project update

Robert N. M. Watson
In reply to this post by Erik Cederstrand
On Wed, 23 Jan 2008, Erik Cederstrand wrote:

>> One way to do this would be a matrix of each metric with its change
>> compared to recent samples.  e.g. you could do a Student's t-test
>> comparison of today's numbers with those from yesterday, or from a week
>> ago, and colour-code those that show a significant deviation from "no
>> change".  This might be a bit noisy on short timescales, so you could
>> aggregate data into larger bins and compare e.g. moving 1-week
>> aggregates.  Fluctuations on short timescales won't stand out, but if
>> there is a real change then it will show up less than a week later.
>
> I agree that there's a need for an overview and some sort of notification.
> I've been collecting historical data to get a baseline for the statistics
> and I'll try to see what I can do over the next weeks.

A thumbnail page of graphs would be quite neat also. :-)

>> These significant events could also be graphed themselves and/or a history
>> log maintained (or automatically annotated on the individual graphs) so
>> historical changes can also be pinpointed.
>>
>> At some point the ability to annotate the data will become important (e.g.
>> "We understand the cause of this, it was r1.123 of foo.c, which was
>> corrected in r1.124.  The developer responsible has been shot.")
>
> There's a field in the database for this sort of thing. I just think it
> needs some sort of authentication. That'll have to wait a bit.

Sounds great -- it would be nice to be able to have a few annotations such as
"RELENG_7 branchpoint", "7.0 release", that could then appear as vertical
lines in the graphs, and likewise things like "netisr made default", "libthr
becomes default".

Finally, in the interests of making your life more complicated, it would be
neat to graph performance across a set of FreeBSD branches overlaid or
vertically offset so you could monitor, say, MySQL performance on 8-CURRENT,
7-STABLE, and 6-STABLE over time.

Robert N M Watson
Computer Laboratory
University of Cambridge

Re: Performance Tracker project update

Brooks Davis
In reply to this post by Kris Kennaway
On Wed, Jan 23, 2008 at 05:48:23AM +0100, Erik Cederstrand wrote:

> Hi
>
> I'd like to send a small update on my progress on the Performance Tracker
> project.
>
> I now have a small setup of a server and a slave chugging along, currently
> collecting data. I'm following CURRENT and collecting results from
> super-smack and unixbench.
>
> The project still needs some work, but there's a temporary web interface to
> the data here: http://littlebit.dk:5000/plot/. Apart from the plotting it's
> possible to compare two dates and see the files that have changed. Error
> bars are 3*standard deviation, for the points with multiple measurements.
>
> Of interest is e.g. super-smack (select-key, 1 client) right when the
> GENERIC kernel was moved from the 4BSD to ULE scheduler on Oct. 19.
> Unixbench (arithmetic test, float) also has a significant jump on Oct. 3.
>
> The setup of the slave is roughly documented on the page, but I'll be
> writing a full report and documentation over the next month.

Nice work so far!  It's neat that you've already been able to spot changes.

A couple of suggestions for the graphs.  It would be nice if the graphs
had an arrow indicating the "good" direction (i.e. which direction of
movement is an improvement).  Also, a graph of the derivative of the
curve might be interesting for at-a-glance trending.
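
For the derivative, a first difference is probably enough; a sketch
(numpy assumed):

    import numpy as np

    def trend(values):
        # crude "derivative": percent change between consecutive samples
        v = np.asarray(values, dtype=float)
        return 100.0 * np.diff(v) / v[:-1]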

-- Brooks


Re: Performance Tracker project update

Erik Cederstrand
In reply to this post by Robert N. M. Watson
Robert Watson wrote:

>
> This looks really exciting!
>
> Do you plan to add a way so that people can submit performance data?  
> I.e., if I set up my own test box and want to submit a result once a
> week for that, will there be a way for me to get set up with a
> username/password, submit configuration information, and then
> automatically submit test result datapoints?  Especially if I can
> specify both the X and Y coordinates so that I can backdate results
> should I go back and generate test results for old kernels?

The website and benchmarking slave talk to a PostgreSQL database. It's
definitely possible and part of the design to have multiple computers
participating in a distributed fashion, although it's also possible to
run the setup locally and privately for more ad-hoc testing purposes.

I think it's best if participating machines supply data regularly for an
extended period of time. Single or infrequent data points for a specific
configuration don't make much sense. We need to compare apples to apples.

Erik

Re: Performance Tracker project update

Robert N. M. Watson
On Wed, 23 Jan 2008, Erik Cederstrand wrote:

> Robert Watson wrote:
>>
>> This looks really exciting!
>>
>> Do you plan to add a way so that people can submit performance data?
>> I.e., if I set up my own test box and want to submit a result once a week
>> for that, will there be a way for me to get set up with a
>> username/password, submit configuration information, and then automatically
>> submit test result datapoints?  Especially if I can specify both the X and
>> Y coordinates so that I can backdate results should I go back and generate
>> test results for old kernels?
>
> The website and benchmarking slave talk to a PostgreSQL database. It's
> definitely possible and part of the design to have multiple computers
> participating in a distributed fashion, although it's also possible to run
> the setup locally and privately for more ad-hoc testing purposes.

Sounds good.

> I think it's best if participating machines supply data regularly for an
> extended period of time. Single or infrequent data points for a specific
> configuration don't make much sense. We need to compare apples to apples.

Yes -- I was mostly thinking about backdating in order to play "catchup" when
a new benchmark is introduced.

Robert N M Watson
Computer Laboratory
University of Cambridge

Re: Performance Tracker project update

Kris Kennaway
Robert Watson wrote:

>> I think it's best if participating machines supply data regularly for
>> an extended period of time. Single or infrequent data points for a
>> specific configuration don't make much sense. We need to compare
>> apples to apples.
>
> Yes -- I was mostly thinking about backdating in order to play "catchup"
> when a new benchmark is introduced.

One thing I am looking at is how best to create a library of world
tarballs that can be used to populate an nfsroot (or a hybrid of
periodic tarballs + binary diffs to save space).  Then you could provide
your benchmark in a standardized format (start/end/cleanup scripts,
etc.) and tell a machine "go and run this benchmark on every daily
snapshot for the last year and give me the numbers".
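
Something along these lines, say (a sketch; the snapshot layout, the
deploy helper and the script names are all hypothetical):

    import os
    import subprocess

    SNAPSHOTS = "/nfs/snapshots"   # hypothetical: one world tarball per day

    def run_benchmark(bench_dir):
        for tag in sorted(os.listdir(SNAPSHOTS)):
            # hypothetical helper that installs the snapshot on the slave
            subprocess.check_call(["deploy-snapshot",
                                   os.path.join(SNAPSHOTS, tag)])
            for step in ("start", "end", "cleanup"):
                subprocess.check_call([os.path.join(bench_dir, step)])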

Kris

Re: Performance Tracker project update

Erik Cederstrand
In reply to this post by Robert N. M. Watson
Robert Watson wrote:
> On Wed, 23 Jan 2008, Erik Cederstrand wrote:
>
>> I agree that there's a need for an overview and some sort of
>> notification. I've been collecting historical data to get a baseline
>> for the statistics and I'll try to see what I can do over the next weeks.
>
> A thumbnail page of graphs would be quite neat also. :-)

I'd like to do that, but I'm a bit afraid of hitting the webserver too
hard. Data is changing constantly, so graphs are created on the fly. I
could probably cache some small versions though and put them on an
overview page.
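
Something like this would bound the load (a sketch; the cache location
and lifetime are invented):

    import os
    import time

    CACHE = "/var/cache/tracker-thumbs"   # invented location
    MAX_AGE = 2 * 60 * 60                 # one sample period, in seconds

    def thumbnail(metric, render):
        # re-render a small cached graph only once it has gone stale
        path = os.path.join(CACHE, metric + ".png")
        try:
            stale = time.time() - os.path.getmtime(path) > MAX_AGE
        except OSError:                   # not rendered yet
            stale = True
        if stale:
            render(metric, path)          # caller-supplied plot function
        return path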

>>> At some point the ability to annotate the data will become important
>>> (e.g. "We understand the cause of this, it was r1.123 of foo.c, which
>>> was corrected in r1.124.  The developer responsible has been shot.")
>>
>> There's a field in the database for this sort of thing. I just think
>> it needs some sort of authentication. That'll have to wait a bit.
>
> Sounds great -- it would be nice to be able to have a few annotations
> such as "RELENG_7 branchpoint", "7.0 release", that could then appear as
> vertical lines in the graphs, and likewise things like "netisr made
> default", "libthr becomes default".

Good idea. Thanks!

> Finally, in the interests of making your life more complicated, it would
> be neat to graph performance across a set of FreeBSD branches overlaid
> or vertically offset so you could monitor, say, MySQL performance on
> 8-CURRENT, 7-STABLE, and 6-STABLE over time.

It's supported by the tools, database and website but currently I just
have 2 (slow) machines to work with at the university. The server is
churning out images for the slave to consume every 2 hours which is an
acceptable sample rate if the intent is to monitor CURRENT development
and provide performance alerts much like the tinderboxes do now. Adding
releases would seriously hurt that rate.

My focus has been strictly on comparing CVS versions, because trying to
cover anything else quickly makes the configuration space explode and
the sample rate go down the drain.  Lots of interesting comparisons
could be made, however, given a sufficient supply of hardware.
Comparing branches would be at the top of the list.

Erik

Re: Performance Tracker project update

Erik Cederstrand
In reply to this post by Kris Kennaway
Kris Kennaway wrote:

> Robert Watson wrote:
>>
>> Yes -- I was mostly thinking about backdating in order to play
>> "catchup" when a new benchmark is introduced.
>
> One thing I am looking at is how best to create a library of world
> tarballs that can be used to populate an nfsroot (or a hybrid of
> periodic tarballs + binary diffs to save space).  Then you could
> provide your benchmark in a standardized format (start/end/cleanup
> scripts, etc.) and tell a machine "go and run this benchmark on every
> daily snapshot for the last year and give me the numbers".

That's basically what my server does.  It creates world/kernel tarballs
(around 90MB), dumps them in an NFS-exported directory and adds them to
a queue for the slave to consume.  The slave installs the tarball on the
local disk (via PXE) in less than two minutes and runs whatever
benchmarks it was told to.  I can pretty easily add more benchmarks
later and let the slave collect the data using the tarballs I already
created.  Currently the benchmark script is contained within the
tarball, so each tarball would need to be unpacked and repacked to
replace the script.  I'd like to change that.

It gets a bit complicated when the benchmark depends on another
application (e.g. super-smack and mysql). Currently I compile mysql from
scratch on every new world using whichever version was the latest in the
ports tree at the time. This has some disadvantages.

Erik

Re: Performance Tracker project update

Ivan Voras
In reply to this post by Kris Kennaway
Kris Kennaway wrote:

>> The project still needs some work, but there's a temporary web
>> interface to the data here: http://littlebit.dk:5000/plot/. Apart from
>> the plotting it's possible to compare two dates and see the files that
>> have changed. Error bars are 3*standard deviation, for the points with
>> multiple measurements.

I have a suggestion to make the graphs more readable: if the user
chooses a long period (e.g. > 100 days / plot points), don't plot
points and error bars; plot a simple line through the points.  Also,
set all date labels on the X-axis to empty strings except at tenths of
the interval.
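
In matplotlib terms (assuming that's what renders the plots), roughly:

    def plot_metric(ax, dates, means, errs):
        # ax: a matplotlib Axes.  dates/means/errs are parallel lists.
        xs = list(range(len(dates)))
        if len(xs) > 100:
            ax.plot(xs, means, "-")                     # long range: line only
        else:
            ax.errorbar(xs, means, yerr=errs, fmt="o")  # points + error bars
        step = max(1, len(xs) // 10)                    # label every tenth date
        ax.set_xticks(xs[::step])
        ax.set_xticklabels(dates[::step], rotation=45)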

Did you remove WITNESS, INVARIANTS and malloc debugging for the benchmarks?



Re: Performance Tracker project update

Kris Kennaway
In reply to this post by Kris Kennaway
Kris Kennaway wrote:

>>> P.S. If I understand correctly, the float test shows a regression?  
>>> The metric is calculations/second, so higher = better?
>>
>> The documentation on Unixbench is scarce, but I would think so.
>
> Interesting.  Some candidate changes from 2007-10-02:
>
>   Modified files:
>     contrib/gcc          opts.c
>   Log:
>   Do not imply -ftree-vrp with -O2 and above.  One must explicitly specify
>   '-ftree-vrp' if one wants it.
>   Some bad code generation has been tracked to -ftree-vrp.  jdk1{5,6} are
>   notable examples.

OK, so it was this one.  The other interesting events seem to be:

2007-10-20: drop in super-smack performance and context switch
benchmarks.  This is due to the switch from SCHED_4BSD to SCHED_ULE
(super-smack is largely a context switch benchmark due to retarded
design).  There are uncommitted patches that reduce ULE context switch
overhead though, so it will be interesting to see how they affect this.

2007-12-30: file read/pipe read/pipe ping-pong/syscall overhead
performance increases.  This is due to Jeff's lockless struct file
changes (the syscall overhead test is affected because unixbench uses
the dup2() syscall, so it is not in fact a pure measure of syscall
overhead and now has reduced non-syscall cost).

Kris

Re: Performance Tracker project update

Alexander Leidinger
In reply to this post by Erik Cederstrand
Quoting Erik Cederstrand <[hidden email]> (from Wed, 23 Jan 2008 21:59:42 +0100):

>> Finally, in the interests of making your life more complicated, it  
>> would be neat to graph performance across a set of FreeBSD branches  
>>  overlaid or vertically offset so you could monitor, say, MySQL  
>> performance on 8-CURRENT, 7-STABLE, and 6-STABLE over time.
>
> It's supported by the tools, database and website but currently I just
> have 2 (slow) machines to work with at the university. The server is
> churning out images for the slave to consume every 2 hours which is an
> acceptable sample rate if the intent is to monitor CURRENT development
> and provide performance alerts much like the tinderboxes do now. Adding
> releases would seriously hurt that rate.
>
> My focus has been strictly on comparing CVS versions, because trying to
> cover anything else quickly makes the configuration space explode and
> the sample rate go down the drain.  Lots of interesting comparisons
> could be made, however, given a sufficient supply of hardware.
> Comparing branches would be at the top of the list.

In case you have some space left for more machines, maybe someone is
willing to help out by sponsoring some.  Just tell us how many machines
you can handle (space/power/...) and whether you'd like us to put it up
on our wantlist.

Bye,
Alexander.

--
http://www.Leidinger.net    Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137

Re: Performance Tracker project update

Borja Marcos
In reply to this post by Kris Kennaway
On Jan 23, 2008, at 12:44 PM, Kris Kennaway wrote:

> One suggestion I have is that as more metrics are added, it becomes
> important to have an "at a glance" overview of changes so we can
> monitor for performance improvements and regressions across many
> workloads.
>
> One way to do this would be a matrix of each metric with its change
> compared to recent samples.  e.g. you could do a Student's t-test
> comparison of today's numbers with those from yesterday, or from a
> week ago, and colour-code those that show a significant deviation
> from "no change".  This might be a bit noisy on short timescales, so
> you could aggregate data into larger bins and compare e.g. moving
> 1-week aggregates.  Fluctuations on short timescales won't stand out,
> but if there is a real change then it will show up less than a week
> later.

And now for some publicity :)

I am a big fan of Orca (www.orcaware.com), which I have used regularly
with Solaris *and* FreeBSD.  I actually wrote a performance data
collector for FreeBSD.

The advantage of Orca is that it takes simple text files of performance
data and then periodically generates graphs and web pages, keeping the
data in RRD databases.  I think Orca would be really useful for this
project.

And of course, if anyone wants to give "Devilator" a try, it currently
graphs CPU usage in system, interrupt and user time, disk I/O bandwidth,
memory usage, system and SWI CPU usage, process sleeping reasons...  So
far it's been really helpful for me in identifying bottlenecks.

Regards,

Borja.


Re: Performance Tracker project update

Erik Cederstrand
In reply to this post by Alexander Leidinger
Alexander Leidinger wrote:
>
> In case you have some space left for more machines, maybe someone is
> willing to help out by sponsoring some. Just tell us how many machines
> you can handle (space/power/...) and if you are interested that we put
> it up on our wantlist.

I'm sharing an office with 4 others and I'm leaving for Real Work in a
month, so I'm not really in a position to expand at the moment. However,
I'd like to do so later on, and also get a more permanent and reliable
location for the machines.

After weeding out the worst bugs the system runs pretty much by itself,
but since this is CURRENT there's always the possibility of the slaves
needing attention.

Erik

Re: Performance Tracker project update

Erik Cederstrand
In reply to this post by Ivan Voras
Ivan Voras wrote:
>
> I have a suggestion to make the graphs more readable: if the user
> chooses a long period (e.g. > 100 days / plot points), don't plot
> points and error bars; plot a simple line through the points.  Also,
> set all date labels on the X-axis to empty strings except at tenths
> of the interval.

Noted. Thanks.

> Did you remove WITNESS, INVARIANTS and malloc debugging for the benchmarks?

The kernel configuration file has:

   include GENERIC
   PERFMON    nomakeoptions    DEBUG
   PERFMON    nooptions        INVARIANTS
   PERFMON    nooptions        GDB
   PERFMON    nooptions        DDB
   PERFMON    nooptions        KDB
   PERFMON    nooptions        WITNESS
   PERFMON    nooptions        WITNESS_SKIPSPIN
   PERFMON    nooptions        INVARIANT_SUPPORT
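
For reference, a kernel from a config like that is built the usual way:

   make buildkernel KERNCONF=PERFMON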

I also ship the images with a GENERIC kernel in case debugging is needed.

I haven't touched malloc.conf but realize that I should. What's the
official recommendation on malloc settings?

Erik

Re: Performance Tracker project update

Ivan Voras
Erik Cederstrand wrote:

> I haven't touched malloc.conf but realize that I should. What's the
> official recommendation on malloc settings?

You'd have to patch /usr/src/lib/libc/stdlib/malloc.c and define
MALLOC_PRODUCTION. Yes, it's not elegant.
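
The whole "patch" amounts to one line near the top of that file, before
the debugging knobs are tested (illustrative):

    #define MALLOC_PRODUCTION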




Re: Performance Tracker project update

Kris Kennaway
In reply to this post by Erik Cederstrand
Erik Cederstrand wrote:
> Ivan Voras wrote:
>>
>> I have a suggestion to make the graphs more readable: if the user
>> chooses a long period (e.g. > 100 days / plot points), don't plot
>> points and error bars; plot a simple line through the points.  Also,
>> set all date labels on the X-axis to empty strings except at tenths
>> of the interval.
>
> Noted. Thanks.

Actually the error bars are quite important to see what is going on.
Some of the metrics are very (too) noisy and if you only look at the
data points they sometimes appear to have a signal when they don't.
Ultimately that just means more data points should be taken per run for
those metrics, but the error bars are the signal for this.

>> Did you remove WITNESS, INVARIANTS and malloc debugging for the
>> benchmarks?
>
> The kernel configuration file has:
> include GENERIC
> PERFMON    nomakeoptions    DEBUG
> PERFMON    nooptions    INVARIANTS
> PERFMON    nooptions    GDB
> PERFMON    nooptions    DDB
> PERFMON    nooptions    KDB
> PERFMON    nooptions    WITNESS
> PERFMON    nooptions    WITNESS_SKIPSPIN
> PERFMON    nooptions    INVARIANT_SUPPORT
>
> I also ship the images with a GENERIC kernel in case debugging is needed.
>
> I haven't touched malloc.conf but realize that I should. What's the
> official recommendation on malloc settings?

For benchmarking you should enable MALLOC_PRODUCTION in
src/lib/libc/stdlib/malloc.c.  Anyway, for now I am not worried about
the particular benchmarks you are running, because they are just
demonstrators for the framework and we can easily go back and add more
later on.

Kris

Re: Performance Tracker project update

Kris Kennaway
In reply to this post by Erik Cederstrand
Erik Cederstrand wrote:

> Alexander Leidinger wrote:
>>
>> In case you have some space left for more machines, maybe someone is
>> willing to help out by sponsoring some. Just tell us how many machines
>> you can handle (space/power/...) and if you are interested that we put
>> it up on our wantlist.
>
> I'm sharing an office with 4 others and I'm leaving for Real Work in a
> month, so I'm not really in a position to expand at the moment. However,
> I'd like to do so later on, and also get a more permanent and reliable
> location for the machines.
>
> After weeding out the worst bugs the system runs pretty much by itself,
> but since this is CURRENT there's always the possibility of the slaves
> needing attention.

I don't see machine availability being a problem once we are ready to
"take this live".

Kris
