Re: OpenZFS and L2ARC


Re: OpenZFS and L2ARC

grahamperrin
On 08/09/2020 08:43, Stefan Esser wrote:
> OpenZFS seems to work quite well for me, in general, but I have
> questions regarding the L2ARC statistics.
>


> The sysutils/zfs-stats port reports the following values for
> this system, BTW:
>
> ------------------------------------------------------------------------
> ZFS Subsystem Report                Tue Sep  8 09:02:46 2020
> ------------------------------------------------------------------------
>


>
> Quite a number of sysctl variable names have changed, and the port
> needs to be adapted to the new names (therefore there are lots of 0
> values in the -L output).
>
> The following names used by zfs-stats do not exist in OpenZFS:
>
> kstat.zfs.misc.arcstats.recycle_miss
> kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned
> kstat.zfs.misc.arcstats.l2_write_buffer_iter
> kstat.zfs.misc.arcstats.l2_write_buffer_list_iter
> kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter
> kstat.zfs.misc.arcstats.l2_write_full
> kstat.zfs.misc.arcstats.l2_write_in_l2
> kstat.zfs.misc.arcstats.l2_write_io_in_progress
> kstat.zfs.misc.arcstats.l2_write_not_cacheable
> kstat.zfs.misc.arcstats.l2_write_passed_headroom
> kstat.zfs.misc.arcstats.l2_write_pios
> kstat.zfs.misc.arcstats.l2_write_spa_mismatch
> kstat.zfs.misc.arcstats.l2_write_trylock_fail
> kstat.zfs.misc.arcstats.l2_writes_hdr_miss
> vfs.zfs.vdev.cache.size
>
> The existence of vfs.zfs.vdev.cache.size (legacy) vs. vfs.zfs.vdev.cache_size
> (OpenZFS) can be used to detect OpenZFS, so that part is easily fixed.
>
> But the L2ARC values listed above seem to have been removed from
> OpenZFS (or never to have existed there), and I did not find any
> substitutes.
>
> Are there any plans to re-create them in OpenZFS on FreeBSD, or are
> they gone for good?
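
As for the detection half mentioned above: the old in-tree ZFS exposes
vfs.zfs.vdev.cache.size while OpenZFS exposes vfs.zfs.vdev.cache_size, so a
minimal sh sketch only has to probe for one of the two names (assuming, as
is the case, that sysctl(8) exits non-zero for an unknown OID):

# Sketch: tell the two sysctl layouts apart by probing the OpenZFS name.
if sysctl -n vfs.zfs.vdev.cache_size > /dev/null 2>&1; then
    echo "OpenZFS sysctl names"
else
    echo "legacy ZFS sysctl names"
fi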

Recalling
<https://lists.freebsd.org/pipermail/freebsd-current/2020-March/075661.html>,
on 28/03/2020 15:17, Allan Jude wrote:

 >> …
 >>
 >> Basically 'arc' was converted to a subtree.
 >>
 >> We should add some backwards-compat sysctls to cover some of
 >> these renames, so configs and scripts don't break.

>
> I'd like to update the zfs-stats port for compatibility with OpenZFS ...
>
Cool!
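
Until such backwards-compat sysctls land, a userland script like zfs-stats
could try the new subtree name first and fall back to the legacy flat name.
A rough sketch of that pattern; the vfs.zfs.arc.max / vfs.zfs.arc_max pair
is used purely as an illustration of one such rename:

# Read a ZFS sysctl, preferring the OpenZFS name and falling back to the
# legacy name; prints nothing if neither OID exists.
zfs_sysctl() {
    sysctl -n "$1" 2> /dev/null || sysctl -n "$2" 2> /dev/null
}

arc_max=$(zfs_sysctl vfs.zfs.arc.max vfs.zfs.arc_max)
echo "ARC max: ${arc_max:-unknown}"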

time sysctl kstat.zfs.misc.dbufs | wc (was: OpenZFS and L2ARC)

grahamperrin
On 09/09/2020 07:46, Stefan Esser wrote:

> … an annoyance that I had noticed before but now have
> tracked down:
>
> $ time sysctl kstat.zfs.misc.dbufs | wc
>    55327 2047031 16333472
>
> real    0m16,446s
> user    0m0,055s
> sys    0m16,397s
>
> …


Here, I get much scrolling but no output from time:


root@momh167-gjp4-8570p:~ # date ; uname -v
Fri Sep 11 05:11:33 BST 2020
FreeBSD 13.0-CURRENT #64 r365364: Sun Sep  6 01:38:18 BST 2020
root@momh167-gjp4-8570p:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG
root@momh167-gjp4-8570p:~ # time sysctl kstat.zfs.misc.dbufs | wc

[… a couple of hundred blank lines scroll past; time(1) produces no report …]
    34699 1283795 10181844
root@momh167-gjp4-8570p:~ #
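
As Brandon's reply below shows, calling /usr/bin/time(1) explicitly (rather
than whatever the shell provides as a builtin) removes any ambiguity, and
since time(1) reports on stderr, redirecting the bulk sysctl output keeps
the terminal from scrolling. A sketch:

# Timing only (discard the ~35k lines of dbuf stats):
/usr/bin/time sysctl kstat.zfs.misc.dbufs > /dev/null

# Timing plus the wc counts (time the whole pipeline):
/usr/bin/time sh -c 'sysctl kstat.zfs.misc.dbufs | wc'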


Re: time sysctl kstat.zfs.misc.dbufs | wc (was: OpenZFS and L2ARC)

Brandon Bergren-3
On Thu, Sep 10, 2020, at 11:17 PM, Graham Perrin wrote:

> On 09/09/2020 07:46, Stefan Esser wrote:
> > … an annoyance that I had noticed before but now have
> > tracked down:
> >
> > $ time sysctl kstat.zfs.misc.dbufs | wc
> >    55327 2047031 16333472
> >
> > real    0m16,446s
> > user    0m0,055s
> > sys    0m16,397s
> >
> > …
>
>

That's nothing:

root@talos:~/devel/poudriere # /usr/bin/time sysctl kstat.zfs.misc.dbufs | wc
      603.59 real         0.03 user       603.39 sys
   63677 2355981 18646506

It literally takes ten minutes on my Talos II.
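
Taking the wc line counts as a rough proxy for the number of dbuf entries,
that works out to roughly 9.5 ms per line on the Talos II against roughly
0.3 ms per line in Stefan's numbers above, about a 30-fold difference:

$ echo 'scale=2; 603.59 * 1000 / 63677' | bc    # ms per line, Talos II
9.47
$ echo 'scale=2; 16.446 * 1000 / 55327' | bc    # ms per line, Stefan's run
.29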

--
  Brandon Bergren
  [hidden email]

Re: time sysctl kstat.zfs.misc.dbufs | wc (was: OpenZFS and L2ARC)

Brandon Bergren-3


On Thu, Sep 10, 2020, at 11:50 PM, Brandon Bergren wrote:

> On Thu, Sep 10, 2020, at 11:17 PM, Graham Perrin wrote:
> > On 09/09/2020 07:46, Stefan Esser wrote:
> > > … an annoyance that I had noticed before but now have
> > > tracked down:
> > >
> > > $ time sysctl kstat.zfs.misc.dbufs | wc
> > >    55327 2047031 16333472
> > >
> > > real    0m16,446s
> > > user    0m0,055s
> > > sys    0m16,397s
> > >
> > > …
> >
> >
>
> That's nothing:
>
> root@talos:~/devel/poudriere # /usr/bin/time sysctl kstat.zfs.misc.dbufs | wc
>       603.59 real         0.03 user       603.39 sys
>    63677 2355981 18646506
>
> It literally takes ten minutes on my Talos II.

FWIW:

Tracing command sysctl pid 25337 tid 104535 td 0xc00800010362b600 (CPU 59)
0xc00800015424cba0: at intr_event_handle+0x130
0xc00800015424cc40: at powerpc_dispatch_intr+0x8c
0xc00800015424ccc0: at xive_dispatch+0x94
0xc00800015424cd50: at PIC_DISPATCH+0x78
0xc00800015424cd90: at powerpc_interrupt+0xb8
0xc00800015424ce20: kernel trap 0xea0 by memset+0x10: srr1=0x9000000000009032
            r1=0xc00800015424d0d0 cr=0x42400004 xer=0 ctr=0xded r2=0xc000000003a57000 frame=0xc00800015424ce50
0xc00800015424d0d0: at dbuf_stats_hash_table_data+0x1e4
0xc00800015424d180: at kstat_sysctl_raw+0x1e8
0xc00800015424d250: at sysctl_root_handler_locked+0x104
0xc00800015424d2c0: at sysctl_root+0x294
0xc00800015424d3b0: at userland_sysctl+0x174
0xc00800015424d4c0: at sys___sysctl+0x8c
0xc00800015424d5b0: at syscallenter+0x184
0xc00800015424d600: at syscall+0x60
0xc00800015424d640: at trap+0x440
0xc00800015424d750: at powerpc_interrupt+0x110
0xc00800015424d7e0: user SC trap by 0x8102de5d0: srr1=0x900000000200f032
            r1=0xfffffbfffbfd0 cr=0x44000382 xer=0 ctr=0x8102de5c0 r2=0x810306bf0 frame=0xc00800015424d810

db> show frame 0xc00800015424ce50
trap frame 0xc00800015424ce50
  r0:   0xc000000002566044 (-4611686018388172732)
  r1:   0xc00800015424d0d0 (-4609434212907036464)
  r2:   0xc000000003a57000 (-4611686018366214144)
  r3:   0xc000000ca00cf000 (-4611685964202577920)
  r4:   0 (0)
  r5:   0x1000 (4096)
  r6:   0xc000000ca00cf212 (-4611685964202577390)
  r7:   0x155a0b7 (22388919)
  r8:   0x1ffffff (33554431)
  r9:   0 (0)
  r10:  0xc0000000035859fe (-4611686018371266050)
  r11:  0 (0)
  r12:  0xc0000000026145dc (-4611686018387458596)
  r13:  0xc00800010362b600 (-4609434214261934592)
  r14:  0x1003d230 (268685872)
  r15:  0x1003d230 (268685872)
  r16:  0x1003d230 (268685872)
  r17:  0x1003d230 (268685872)
  r18:  0x810317b08 (34631416584)
  r19:  0x155a0b7 (22388919)
  r20:  0 (0)
  r21:  0xc000000003b61600 (-4611686018365123072)
  r22:  0xc0000000035902b0 (-4611686018371222864)
  r23:  0xc0000000035859fe (-4611686018371266050)
  r24:  0xc000000003b21488 (-4611686018365385592)
  r25:  0xc0000000039ad5b0 (-4611686018366909008)
  r26:  0xc00000000261480c (-4611686018387458036)
  r27:  0xc00000000361d05a (-4611686018370645926)
  r28:  0xc000000ca00cf000 (-4611685964202577920)
  r29:  0x1000 (4096)
  r30:  0xc000000003b61600 (-4611686018365123072)
  r31:  0xc00800015424d0d0 (-4609434212907036464)
  lr:   0xc0000000026146a0
  cr:   0x42400004
  xer:  0
  ctr:  0xded (3565)
  srr0: 0xc0000000030a5cc0
  srr1: 0x9000000000009032
  exc:  0xea0
  dar:  0xc0080001f2036b23
  dsisr:        0x2000000

Every time I've looked in on it, it appears to be zeroing a page of memory.
I believe there is something going wrong with the buffer management here.
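
For anyone who would rather not drop into ddb, a crude userland cross-check
is to sample the kernel stack of the sysctl(8) process while it grinds away;
if dbuf_stats_hash_table_data and memset dominate the samples, that points
the same way. A sketch, assuming the slow invocation is already running and
is the newest sysctl process on the machine:

# Sample the kernel stack of the running sysctl(8) once per second.
pid=$(pgrep -n sysctl)
while kill -0 "$pid" 2> /dev/null; do
    procstat -kk "$pid"
    sleep 1
done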

--
  Brandon Bergren
  [hidden email]