vdev state changed & zfs scrub


vdev state changed & zfs scrub

Dan Langille
I see this on more than one system:

Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552

zpool status output includes:

$ zpool status
  pool: system
 state: ONLINE
  scan: scrub in progress since Wed Apr 19 03:12:22 2017
        2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
        0 repaired, 41.94% done

The timing is not coincidental: the messages appear at exactly the time the scrub started.

Why is the vdev state changing?

Thank you.

--
Dan Langille - BSDCan / PGCon
[hidden email]



Re: vdev state changed & zfs scrub

Johan Hendriks-3
On 19/04/2017 at 16:56, Dan Langille wrote:

> I see this on more than one system:
>
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
> [...]
>
> The timing is not coincidental: the messages appear at exactly the time the scrub started.
>
> Why is the vdev state changing?
>
I have the same "issue"; I asked about this on the stable list but did not get
any reaction.
https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html

In my initial mail it was only one machine running 11.0; the rest were
running 10.x.
Now that I have upgraded other machines to 11.0, I see it there as well.

regards
Johan Hendriks




Re: vdev state changed & zfs scrub

Andriy Gapon
On 20/04/2017 12:39, Johan Hendriks wrote:

> On 19/04/2017 at 16:56, Dan Langille wrote:
>> I see this on more than one system:
>> [...]
>> Why is the vdev state changing?
>>
> I have the same "issue"; I asked about this on the stable list but did not get
> any reaction.
> https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
>
> In my initial mail it was only one machine running 11.0; the rest were
> running 10.x.
> Now that I have upgraded other machines to 11.0, I see it there as well.

Previously, none of the ZFS events were logged at all; that's why you never saw them.
As to those particular events, unfortunately the two GUIDs are all that the event
contains.  So, to get the state you have to check it explicitly, for example
with zpool status.  It could be that the scrub is simply re-opening the devices,
so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED and back to
VDEV_STATE_HEALTHY.  You can simply ignore those reports if you don't see any
trouble.
Maybe lower the priority of those messages in /etc/devd/zfs.conf...
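
If you want to see which devices those GUIDs refer to, something along these
lines should work (a rough sketch, using the pool name "system" and the first
vdev_guid from your log as examples; zdb without arguments dumps the cached
pool configuration, which lists the guid and path of every vdev):

$ zdb | grep -A 3 'guid: 3558867368789024889'
$ zpool status system

The path printed a line or two below the matching guid is the device backing
that vdev.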

--
Andriy Gapon

Re: vdev state changed & zfs scrub

Dan Langille
> On Apr 20, 2017, at 7:18 AM, Andriy Gapon <[hidden email]> wrote:
>
> [...]
> You can simply ignore those reports if you don't see any trouble.
> Maybe lower the priority of those messages in /etc/devd/zfs.conf...

I found the relevant entry in said file:

notify 10 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};

Is the 10 in "notify 10" the priority you mean?

At first, I thought it might be the kern.notice, but reading syslog.conf(5), notice is a level, not a priority.

I've changed the 10 to a 1 and we shall see.

Thank you.

--
Dan Langille - BSDCan / PGCon
[hidden email]



Re: vdev state changed & zfs scrub

Martin Simmons
>>>>> On Thu, 20 Apr 2017 07:42:47 -0400, Dan Langille said:
>
> > [...]
> > Maybe lower the priority of those messages in /etc/devd/zfs.conf...
>
> I found the relevant entry in said file:
>
> notify 10 {
>         match "system"          "ZFS";
>         match "type"            "resource.fs.zfs.statechange";
>         action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
> };
>
> Is the 10 in "notify 10" the priority you mean?
>
> At first, I thought it might be the kern.notice, but reading syslog.conf(5), notice is a level, not a priority.

No, I think he meant changing kern.notice to something else, such as kern.info, so
you don't see them in /var/log/messages (as controlled by /etc/syslog.conf).
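
I.e., something like this for that notify block (just a sketch of the suggested
one-word change; the 10 in "notify 10" is devd's own rule-matching priority,
not a syslog value):

# Same rule as in /etc/devd/zfs.conf, with kern.notice changed to kern.info
notify 10 {
        match "system"          "ZFS";
        match "type"            "resource.fs.zfs.statechange";
        action "logger -p kern.info -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};

devd needs a restart (service devd restart) to pick up the change, and whether
the messages then still land in /var/log/messages depends on the kern.*
selector in /etc/syslog.conf.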

__Martin

Re: vdev state changed & zfs scrub

Andriy Gapon
On 20/04/2017 18:14, Martin Simmons wrote:
> No, I think he meant changing kern.notice to something else, such as kern.info, so
> you don't see them in /var/log/messages (as controlled by /etc/syslog.conf).

Yes, that's exactly what I meant.
Sorry for not being clear.
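
For reference, the /var/log/messages line in a stock /etc/syslog.conf usually
looks something like this (a sketch; the exact selectors on your systems may
differ):

*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err          /var/log/messages

As long as that line carries kern.debug, kern.info messages will still be
written there, so tightening the kern.* selector (e.g. to kern.notice) is the
other half of keeping the statechange notices out of /var/log/messages.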

--
Andriy Gapon