panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

bob prohaska
A Pi3 running FreeBSD 13.0-CURRENT (GENERIC) #4 r356366 reported,
while compiling www/chromium:

panic: non-current pmap 0xfffffd000385f5a0
cpuid = 1
time = 1578525992
KDB: stack backtrace:
db_trace_self() at db_trace_self_wrapper+0x28
         pc = 0xffff000000736b9c  lr = 0xffff000000106814
         sp = 0xffff00005a92ae40  fp = 0xffff00005a92b050

db_trace_self_wrapper() at vpanic+0x18c
         pc = 0xffff000000106814  lr = 0xffff0000004088c0
         sp = 0xffff00005a92b060  fp = 0xffff00005a92b110

vpanic() at panic+0x44
         pc = 0xffff0000004088c0  lr = 0xffff000000408670
         sp = 0xffff00005a92b120  fp = 0xffff00005a92b1a0

panic() at pmap_remove_pages+0x8b4
         pc = 0xffff000000408670  lr = 0xffff00000074d890
         sp = 0xffff00005a92b1b0  fp = 0xffff00005a92b270

pmap_remove_pages() at exec_new_vmspace+0x188
         pc = 0xffff00000074d890  lr = 0xffff0000003c1250
         sp = 0xffff00005a92b280  fp = 0xffff00005a92b2d0

exec_new_vmspace() at exec_elf64_imgact+0x744
         pc = 0xffff0000003c1250  lr = 0xffff00000039a358
         sp = 0xffff00005a92b2e0  fp = 0xffff00005a92b3d0

exec_elf64_imgact() at kern_execve+0x458
         pc = 0xffff00000039a358  lr = 0xffff0000003c01b0
         sp = 0xffff00005a92b3e0  fp = 0xffff00005a92b730

kern_execve() at sys_execve+0x54
         pc = 0xffff0000003c01b0  lr = 0xffff0000003bfa2c
         sp = 0xffff00005a92b740  fp = 0xffff00005a92b7b0

sys_execve() at do_el0_sync+0x514
         pc = 0xffff0000003bfa2c  lr = 0xffff000000753c44
         sp = 0xffff00005a92b7c0  fp = 0xffff00005a92b860

do_el0_sync() at handle_el0_sync+0x90
         pc = 0xffff000000753c44  lr = 0xffff000000739224
         sp = 0xffff00005a92b870  fp = 0xffff00005a92b980

handle_el0_sync() at 0x214978
         pc = 0xffff000000739224  lr = 0x0000000000214978
         sp = 0xffff00005a92b990  fp = 0x0000ffffffffd490

KDB: enter: panic
[ thread pid 32060 tid 100321 ]
Stopped at      0x40425654
db> bt
Tracing pid 32060 tid 100321 td 0xfffffd000ba68000
db_trace_self() at db_stack_trace+0xf8
         pc = 0xffff000000736b9c  lr = 0xffff000000103c58
         sp = 0xffff00005a92aa10  fp = 0xffff00005a92aa40

db_stack_trace() at db_command+0x228
         pc = 0xffff000000103c58  lr = 0xffff0000001038d0
         sp = 0xffff00005a92aa50  fp = 0xffff00005a92ab30

db_command() at db_command_loop+0x58
         pc = 0xffff0000001038d0  lr = 0xffff000000103678
         sp = 0xffff00005a92ab40  fp = 0xffff00005a92ab60

db_command_loop() at db_trap+0xf4
         pc = 0xffff000000103678  lr = 0xffff00000010697c
         sp = 0xffff00005a92ab70  fp = 0xffff00005a92ad90

db_trap() at kdb_trap+0x1d8
         pc = 0xffff00000010697c  lr = 0xffff00000044fd70
         sp = 0xffff00005a92ada0  fp = 0xffff00005a92ae50
       
kdb_trap() at do_el1h_sync+0xf4
         pc = 0xffff00000044fd70  lr = 0xffff0000007535b8
         sp = 0xffff00005a92ae60  fp = 0xffff00005a92ae90

do_el1h_sync() at handle_el1h_sync+0x78
         pc = 0xffff0000007535b8  lr = 0xffff000000739078
         sp = 0xffff00005a92aea0  fp = 0xffff00005a92afb0

handle_el1h_sync() at kdb_enter+0x34
         pc = 0xffff000000739078  lr = 0xffff00000044f3bc
         sp = 0xffff00005a92afc0  fp = 0xffff00005a92b050

kdb_enter() at vpanic+0x1a8
         pc = 0xffff00000044f3bc  lr = 0xffff0000004088dc
         sp = 0xffff00005a92b060  fp = 0xffff00005a92b110

vpanic() at panic+0x44
         pc = 0xffff0000004088dc  lr = 0xffff000000408670
         sp = 0xffff00005a92b120  fp = 0xffff00005a92b1a0
       
panic() at pmap_remove_pages+0x8b4
         pc = 0xffff000000408670  lr = 0xffff00000074d890
         sp = 0xffff00005a92b1b0  fp = 0xffff00005a92b270

pmap_remove_pages() at exec_new_vmspace+0x188
         pc = 0xffff00000074d890  lr = 0xffff0000003c1250
         sp = 0xffff00005a92b280  fp = 0xffff00005a92b2d0

exec_new_vmspace() at exec_elf64_imgact+0x744
         pc = 0xffff0000003c1250  lr = 0xffff00000039a358
         sp = 0xffff00005a92b2e0  fp = 0xffff00005a92b3d0

exec_elf64_imgact() at kern_execve+0x458
         pc = 0xffff00000039a358  lr = 0xffff0000003c01b0
         sp = 0xffff00005a92b3e0  fp = 0xffff00005a92b730

kern_execve() at sys_execve+0x54
         pc = 0xffff0000003c01b0  lr = 0xffff0000003bfa2c
         sp = 0xffff00005a92b740  fp = 0xffff00005a92b7b0
       
sys_execve() at do_el0_sync+0x514
         pc = 0xffff0000003bfa2c  lr = 0xffff000000753c44
         sp = 0xffff00005a92b7c0  fp = 0xffff00005a92b860

do_el0_sync() at handle_el0_sync+0x90
         pc = 0xffff000000753c44  lr = 0xffff000000739224
         sp = 0xffff00005a92b870  fp = 0xffff00005a92b980

handle_el0_sync() at 0x214978
         pc = 0xffff000000739224  lr = 0x0000000000214978
         sp = 0xffff00005a92b990  fp = 0x0000ffffffffd490

db>

Hopefully this is of interest to somebody, thanks for reading!

bob prohaska

_______________________________________________
[hidden email] mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-arm
To unsubscribe, send any mail to "[hidden email]"

Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

Konstantin Belousov
On Wed, Jan 08, 2020 at 03:56:30PM -0800, bob prohaska wrote:

> A Pi3 running FreeBSD 13.0-CURRENT (GENERIC) #4 r356366 reported,
> while compiling www/chromium:
>
> panic: non-current pmap 0xfffffd000385f5a0
> cpuid = 1
> time = 1578525992
> [...]
> db>

It would be useful to see both the curcpu pc_curpmap content,
and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
from the vmcore.
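For anyone with a saved crash dump who wants to supply this, a kgdb session along the following lines should print both structures. This is a hedged sketch only: the paths are the usual FreeBSD defaults, the panicking CPU in this report was cpuid 1, and the per-CPU array name (`__pcpu`) and its layout are assumptions about this kernel version, not confirmed in the thread.

```sh
# Hypothetical kgdb session; the devel/gdb package provides
# /usr/local/bin/kgdb. Pair the vmcore with the debug info of the
# kernel that actually panicked.
kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.last

# The pmap named in the panic message:
(kgdb) print *(struct pmap *)0xfffffd000385f5a0

# The active pmap on the CPU that panicked (cpuid 1); __pcpu as the
# per-CPU array name is an assumption:
(kgdb) print *__pcpu[1].pc_curpmap
```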

Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

bob prohaska
On Thu, Jan 09, 2020 at 01:51:23PM +0200, Konstantin Belousov wrote:
>
> It would be useful to see both the curcpu pc_curpmap content,
> and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
> from the vmcore.

The machine is presently updating to r356529. If the panic recurs,
and somebody can describe how to capture more useful information,
I'd be pleased to do it. References to documentation are welcome,
but, not being a programmer, my ability to understand it is limited.

Thanks for reading!

bob prohaska


Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

Ralf Wenk-5
In reply to this post by Konstantin Belousov
Hi,

On 2020-01-09 at 13:51 +0200 Konstantin Belousov wrote:
> On Wed, Jan 08, 2020 at 03:56:30PM -0800, bob prohaska wrote:
> > A Pi3 running FreeBSD 13.0-CURRENT (GENERIC) #4 r356366 reported,
> > while compiling www/chromium:
> >
> > panic: non-current pmap 0xfffffd000385f5a0
> [...]
> > db>

While doing "mergemaster -F -m /usr/src" one of my RPi3s experienced
the same kind of panic:

panic: non-current pmap 0xfffffd0001bca2a0
cpuid = 1
time = 1578921150
KDB: stack backtrace:
db_trace_self() at db_trace_self_wrapper+0x28
         pc = 0xffff000000736b3c  lr = 0xffff000000106814
         sp = 0xffff00005a1782b0  fp = 0xffff00005a1784c0

db_trace_self_wrapper() at vpanic+0x18c
         pc = 0xffff000000106814  lr = 0xffff000000408600
         sp = 0xffff00005a1784d0  fp = 0xffff00005a178580

vpanic() at panic+0x44
         pc = 0xffff000000408600  lr = 0xffff0000004083b0
         sp = 0xffff00005a178590  fp = 0xffff00005a178610

panic() at pmap_remove_pages+0x8b4
         pc = 0xffff0000004083b0  lr = 0xffff00000074d890
         sp = 0xffff00005a178620  fp = 0xffff00005a1786e0

pmap_remove_pages() at vmspace_exit+0xc0
         pc = 0xffff00000074d890  lr = 0xffff0000006d3710
         sp = 0xffff00005a1786f0  fp = 0xffff00005a178720

vmspace_exit() at exit1+0x4f8
         pc = 0xffff0000006d3710  lr = 0xffff0000003c2ab4
         sp = 0xffff00005a178730  fp = 0xffff00005a1787a0

exit1() at sys_sys_exit+0x10
         pc = 0xffff0000003c2ab4  lr = 0xffff0000003c25b8
         sp = 0xffff00005a1787b0  fp = 0xffff00005a1787b0

sys_sys_exit() at do_el0_sync+0x514
         pc = 0xffff0000003c25b8  lr = 0xffff000000753c44
         sp = 0xffff00005a1787c0  fp = 0xffff00005a178860

do_el0_sync() at handle_el0_sync+0x90
         pc = 0xffff000000753c44  lr = 0xffff000000739224
         sp = 0xffff00005a178870  fp = 0xffff00005a178980

handle_el0_sync() at 0x403ef6bc
         pc = 0xffff000000739224  lr = 0x00000000403ef6bc
         sp = 0xffff00005a178990  fp = 0x0000ffffffffd7c0

KDB: enter: panic
[ thread pid 44425 tid 100460 ]
Stopped at      0x4040ddfc
db>
db> bt
Tracing pid 44425 tid 100460 td 0xfffffd000fff5560
db_trace_self() at db_stack_trace+0xf8
         pc = 0xffff000000736b3c  lr = 0xffff000000103c58
         sp = 0xffff00005a177e80  fp = 0xffff00005a177eb0

db_stack_trace() at db_command+0x228
         pc = 0xffff000000103c58  lr = 0xffff0000001038d0
         sp = 0xffff00005a177ec0  fp = 0xffff00005a177fa0

db_command() at db_command_loop+0x58
         pc = 0xffff0000001038d0  lr = 0xffff000000103678
         sp = 0xffff00005a177fb0  fp = 0xffff00005a177fd0

db_command_loop() at db_trap+0xf4
         pc = 0xffff000000103678  lr = 0xffff00000010697c
         sp = 0xffff00005a177fe0  fp = 0xffff00005a178200

db_trap() at kdb_trap+0x1d8
         pc = 0xffff00000010697c  lr = 0xffff00000044fa74
         sp = 0xffff00005a178210  fp = 0xffff00005a1782c0

kdb_trap() at do_el1h_sync+0xf4
         pc = 0xffff00000044fa74  lr = 0xffff0000007535b8
         sp = 0xffff00005a1782d0  fp = 0xffff00005a178300

do_el1h_sync() at handle_el1h_sync+0x78
         pc = 0xffff0000007535b8  lr = 0xffff000000739078
         sp = 0xffff00005a178310  fp = 0xffff00005a178420

handle_el1h_sync() at kdb_enter+0x34
         pc = 0xffff000000739078  lr = 0xffff00000044f0c0
         sp = 0xffff00005a178430  fp = 0xffff00005a1784c0

kdb_enter() at vpanic+0x1a8
         pc = 0xffff00000044f0c0  lr = 0xffff00000040861c
         sp = 0xffff00005a1784d0  fp = 0xffff00005a178580

vpanic() at panic+0x44
         pc = 0xffff00000040861c  lr = 0xffff0000004083b0
         sp = 0xffff00005a178590  fp = 0xffff00005a178610

panic() at pmap_remove_pages+0x8b4
         pc = 0xffff0000004083b0  lr = 0xffff00000074d890
         sp = 0xffff00005a178620  fp = 0xffff00005a1786e0

pmap_remove_pages() at vmspace_exit+0xc0
         pc = 0xffff00000074d890  lr = 0xffff0000006d3710
         sp = 0xffff00005a1786f0  fp = 0xffff00005a178720

vmspace_exit() at exit1+0x4f8
         pc = 0xffff0000006d3710  lr = 0xffff0000003c2ab4
         sp = 0xffff00005a178730  fp = 0xffff00005a1787a0

exit1() at sys_sys_exit+0x10
         pc = 0xffff0000003c2ab4  lr = 0xffff0000003c25b8
         sp = 0xffff00005a1787b0  fp = 0xffff00005a1787b0

sys_sys_exit() at do_el0_sync+0x514
         pc = 0xffff0000003c25b8  lr = 0xffff000000753c44
         sp = 0xffff00005a1787c0  fp = 0xffff00005a178860

do_el0_sync() at handle_el0_sync+0x90
         pc = 0xffff000000753c44  lr = 0xffff000000739224
         sp = 0xffff00005a178870  fp = 0xffff00005a178980

handle_el0_sync() at 0x403ef6bc
         pc = 0xffff000000739224  lr = 0x00000000403ef6bc
         sp = 0xffff00005a178990  fp = 0x0000ffffffffd7c0

db>
db> show pcpu
cpuid        = 1
dynamic pcpu = 0x3fea1f00
curthread    = 0xfffffd000fff5560: pid 44425 tid 100460 critnest 1 "install"
curpcb       = 0xffff00005a178aa0
fpcurthread  = 0xfffffd000fff5560: pid 44425 "install"
idlethread   = 0xfffffd0000aaa560: tid 100004 "idle: cpu1"
curvnet      = 0
spin locks held:
db>

> It would be useful to see both the curcpu pc_curpmap content,
> and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
> from the vmcore.

I do not know the exact kernel debugger commands to print/show the
"curcpu pc_curpmap content" or dump "*(struct pmap *)0xfffffd0001bca2a0"
and "*pc_curpmap" to help, but this RPi3 can stay in the kernel debugger
for a while.

If you can tell me the necessary kernel debugger commands I will execute
them and mail the results.

As there is no swap space configured, I think I am unable to produce a
kernel core dump which could be analysed later.
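For reference, the minimal setup that makes such a dump possible is a swap partition plus two rc.conf knobs. This is a generic sketch, not a description of the machine in this thread:

```sh
# /etc/rc.conf additions (generic sketch).
# dumpdev=AUTO selects a configured swap device as the panic-dump target;
# savecore(8) then copies the dump into dumpdir on the next boot.
dumpdev="AUTO"
dumpdir="/var/crash"
```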

The panic happened during an update to today's CURRENT, but I saved the
former state in a boot environment.

Ralf


Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

bob prohaska
On Mon, Jan 13, 2020 at 08:16:23AM -0800, bob prohaska wrote:
> Hi Ralf,
>
> On Mon, Jan 13, 2020 at 03:40:29PM +0100, Ralf Wenk wrote:
> >
> > While doing "mergemaster -F -m /usr/src" one of my RPi3s experienced
> > the same kind of panic:
> >
> Mine has been updated to r356617 and has been compiling www/chromium
> since then. So far, so good, no more crashes. It might be fixed.

Happened again running r356617 during buildworld of r357054. It got well
into the building libraries phase before failing.

I'll start over, but this time just try buildkernel et al.

Thanks for reading,

bob prohaska
 

Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

bob prohaska
In reply to this post by bob prohaska
On Thu, Jan 09, 2020 at 09:23:14AM -0800, bob prohaska wrote:
> On Thu, Jan 09, 2020 at 01:51:23PM +0200, Konstantin Belousov wrote:
> >
> > It would be useful to see both the curcpu pc_curpmap content,
> > and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
> > from the vmcore.

The Pi3 is now up to r362283 and just reported:

panic: non-current pmap 0xfffffd000142d440
cpuid = 0
time = 1593368952
KDB: stack backtrace:
db_trace_self() at db_trace_self_wrapper+0x28
         pc = 0xffff00000075e24c  lr = 0xffff00000010a468
         sp = 0xffff00005a86d2e0  fp = 0xffff00005a86d4e0

db_trace_self_wrapper() at vpanic+0x194
         pc = 0xffff00000010a468  lr = 0xffff000000419dcc
         sp = 0xffff00005a86d4f0  fp = 0xffff00005a86d540

vpanic() at panic+0x44
         pc = 0xffff000000419dcc  lr = 0xffff000000419b74
         sp = 0xffff00005a86d550  fp = 0xffff00005a86d600

panic() at pmap_remove_pages+0x908
         pc = 0xffff000000419b74  lr = 0xffff000000776e00
         sp = 0xffff00005a86d610  fp = 0xffff00005a86d680

pmap_remove_pages() at vmspace_exit+0x104
         pc = 0xffff000000776e00  lr = 0xffff0000006f7024
         sp = 0xffff00005a86d690  fp = 0xffff00005a86d6e0

vmspace_exit() at exit1+0x48c
         pc = 0xffff0000006f7024  lr = 0xffff0000003d13fc
         sp = 0xffff00005a86d6f0  fp = 0xffff00005a86d750

exit1() at sys_sys_exit+0x10
         pc = 0xffff0000003d13fc  lr = 0xffff0000003d0f6c
         sp = 0xffff00005a86d760  fp = 0xffff00005a86d7b0

sys_sys_exit() at do_el0_sync+0x3f8
         pc = 0xffff0000003d0f6c  lr = 0xffff00000077dac8
         sp = 0xffff00005a86d7c0  fp = 0xffff00005a86d830

do_el0_sync() at handle_el0_sync+0x90
         pc = 0xffff00000077dac8  lr = 0xffff000000760a24
         sp = 0xffff00005a86d840  fp = 0xffff00005a86d980

handle_el0_sync() at 0x404bd678
         pc = 0xffff000000760a24  lr = 0x00000000404bd678
         sp = 0xffff00005a86d990  fp = 0x0000ffffffffe960

KDB: enter: panic
[ thread pid 42572 tid 100137 ]
Stopped at      0x4053fcfc
db>


This time it was in the early stages of compiling www/chromium.
Boot and root are on a mechanical hard disk; the last top(1) page
before the machine died was:

last pid: 42562;  load averages:  1.40,  1.37,  1.38                      up 8+22:05:11  11:29:10
47 processes:  3 running, 44 sleeping
CPU: 27.1% user,  0.0% nice, 11.4% system,  0.4% interrupt, 61.0% idle
Mem: 92M Active, 237M Inact, 1468K Laundry, 158M Wired, 77M Buf, 415M Free
Swap: 6042M Total, 194M Used, 5849M Free, 3% Inuse
packet_write_wait: Connection to 50.1.20.28 port 22: Broken pipe
bob@raspberrypi:~ $ R PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
42514 root          1  88    0   111M    63M CPU2     2   0:08 100.21% c++
81775 bob           1  52    0    13M   352K wait     0   9:50   0.35% sh
29366 bob           1  20    0    14M  1340K CPU0     0   3:00   0.22% top
29351 bob           1  20    0    20M   936K select   2   0:15   0.03% sshd
  639 root          1  20    0    13M   972K select   3   0:28   0.01% syslogd
30908 root          1  52    0   194M    40M select   1   1:52   0.00% ninja
46086 bob           1  20    0    20M   312K select   0   1:48   0.00% sshd
......

I'll update the OS sources and try again; if somebody can tell me
how to capture more useful information, I'll try that.

Thanks for reading,

bob prohaska


Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

freebsd-arm mailing list
On 2020-Jun-28, at 12:50, bob prohaska <fbsd at www.zefox.net> wrote:

> On Thu, Jan 09, 2020 at 09:23:14AM -0800, bob prohaska wrote:
>> On Thu, Jan 09, 2020 at 01:51:23PM +0200, Konstantin Belousov wrote:
>>>
>>> It would be useful to see both the curcpu pc_curpmap content,
>>> and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
>>> from the vmcore.
>
> The Pi3 is now up to r362283 and just reported:
>
> panic: non-current pmap 0xfffffd000142d440
> cpuid = 0
> time = 1593368952
> [...]
>
> I'll  update OS sources and try again, if somebody can tell me
> how to capture more useful information I'll try that.

Do you have your system set up to allow it to
dump to the swap/paging space for panics and
then to put a copy in the /var/crash/ area during
the next boot? Konstantin B. was asking for
information from such a dump.

Note: A dump can be requested at the db> prompt
by typing a "dump" command at the prompt, if you
have set up to have a dump target identified,
usually a swap/paging partition. If it works, the
next boot would take some time putting material
into /var/crash.
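At the db> prompt itself the sequence is short; a sketch, assuming a dump device has been configured (`dump` and `reset` are the commands ddb(4) documents for this):

```sh
db> dump     # write kernel memory to the configured dump device
db> reset    # reboot; savecore(8) extracts the dump during the next boot
```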

An example of such materials in /var/crash/ is
(2 dumps):

# ls -ldT /var/crash/*
-rw-r--r--  1 root  wheel        2 Jun 16 19:58:17 2020 /var/crash/bounds
-rw-r--r--  1 root  wheel    32484 Jun 11 20:34:35 2020 /var/crash/core.txt.3
-rw-r--r--  1 root  wheel    32498 Jun 16 19:58:47 2020 /var/crash/core.txt.4
-rw-------  1 root  wheel      561 Jun 11 20:34:04 2020 /var/crash/info.3
-rw-------  1 root  wheel      562 Jun 16 19:58:17 2020 /var/crash/info.4
lrwxr-xr-x  1 root  wheel        6 Jun 16 19:58:17 2020 /var/crash/info.last -> info.4
-rw-r--r--  1 root  wheel        5 Feb 22 02:37:33 2016 /var/crash/minfree
-rw-------  1 root  wheel  9424896 Jun 11 20:34:04 2020 /var/crash/vmcore.3
-rw-------  1 root  wheel  9424896 Jun 16 19:58:17 2020 /var/crash/vmcore.4
lrwxr-xr-x  1 root  wheel        8 Jun 16 19:58:17 2020 /var/crash/vmcore.last -> vmcore.4

Do you have devel/gdb installed? It supplies a
/usr/local/bin/kgdb for looking at such vmcore.*
files.

It is important that the kernel debug information
still match the vmcore as I understand, even if
that means needing to boot a different,
sufficiently-working kernel that does not match the
debug information in order to get the /var/crash
materials in place and to inspect them.
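Concretely, checking and using that pairing might look like this. The info.last file and the /usr/lib/debug path are the standard savecore(8) and installkernel locations, assumed here rather than confirmed in this thread:

```sh
# info.last records the panic string and kernel version of the dump,
# so it can be compared against the debug kernel on hand:
cat /var/crash/info.last

# Open the dump against the matching debug kernel:
kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.last
```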

I'm not sure you could do as Konstantin requested
based on a non-debug kernel build done the usual
way, even with debug information present.

Are you using a non-debug kernel? A debug-kernel?
You might need to try reproducing with a debug
kernel. (But that likely will make builds
take longer.)

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

bob prohaska
On Sun, Jun 28, 2020 at 05:21:33PM -0700, Mark Millard wrote:

> On 2020-Jun-28, at 12:50, bob prohaska <fbsd at www.zefox.net> wrote:
>
> > I'll  update OS sources and try again, if somebody can tell me
> > how to capture more useful information I'll try that.
>
> Do you have your system set up to allow it to
> dump to the swap/paging space for panics and
> then to put a copy in the /var/crash/ area during
> the next boot? Konstantin B. was asking for
> information from such a dump.
>
Ahh, that might be a little beyond my skill level....
The system is basically default -current, no kernel
options. In fact, I'm no longer sure where to put them
when using buildkernel.
 
> Note: A dump can be requested at the db> prompt
> by typing a "dump" command at the prompt, if you
> have set up to have a dump target identified,
> usually a swap/paging partition. If it works, the
> next boot would take some time putting material
> into /var/crash.
>
I'll try next time it happens. Looks like the system
defaults turn on dumpdev and savecore.
 
> Do you have devel/gdb installed? It supplies a
> /usr/local/bin/kgdb for looking at such vmcore.*
> files.

Not yet.

>
> It is important that the kernel debug information
> still match the vmcore as I understand, even if
> that means needing to boot a different,
> sufficiently-working kernel that does not match the
> debug information in order to get the /var/crash
> materials in place and to inspect them.
>
> I'm not sure you could do as Konstantin requested
> based on a non-debug kernel build done the usual
> way, even with debug information present.
>
One step at a time..... 8-)
 
> Are you using a non-debug kernel? A debug-kernel?

The embarrassing truth is I don't know. Whatever is
default in -current for the Pi3.

> You might need to try reproducing with a debug
> kernel. (But that likely will make builds
> take longer.)
>
Any sense of how much longer?  A chromium build,
when it worked, took a week.

Thanks for writing!

bob prohaska


Re: panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366

freebsd-arm mailing list
On 2020-Jun-28, at 18:11, bob prohaska <fbsd at www.zefox.net> wrote:

> On Sun, Jun 28, 2020 at 05:21:33PM -0700, Mark Millard wrote:
>> On 2020-Jun-28, at 12:50, bob prohaska <fbsd at www.zefox.net> wrote:
>>
>>> I'll  update OS sources and try again, if somebody can tell me
>>> how to capture more useful information I'll try that.
>>
>> Do you have your system set up to allow it to
>> dump to the swap/paging space for panics and
>> then to put a copy in the /var/crash/ area during
>> the next boot? Konstantin B. was asking for
>> information from such a dump.
>>
> Ahh, that might be a little beyond my skill level....
> The system is basically default -current, no kernel
> options. In fact, I'm no longer sure where to put them
> when using buildkernel.

Actually, head (i.e., -current) by default builds a debug
kernel; stable and release do not.

(My builds usually have the debug disabled, but I
explicitly cause that.)

>> Note: A dump can be requested at the db> prompt
>> by typing a "dump" command at the prompt, if you
>> have set up to have a dump target identified,
>> usually a swap/paging partition. If it works, the
>> next boot would take some time putting material
>> into /var/crash.
>>
> I'll try next time it happens. Looks like the system
> defaults turn on dumpdev and savecore.

Your swap partition where dumpdev points needs to
be big enough. From past experience, your likely
are.

>> Do you have devel/gdb installed? It supplies a
>> /usr/local/bin/kgdb for looking at such vmcore.*
>> files.
>
> Not yet.

These days head-based builds do not provide their
own kgdb support. It can be a good idea to have
devel/gdb installed even if you do not normally
use it: it gets messy if something goes wrong
and building and installing devel/gdb then
becomes a difficulty, after the fact.

>>
>> It is important that the kernel debug information
>> still match the vmcore as I understand, even if
>> that means needing to boot a different,
>> sufficiently-working kernel that does not match the
>> debug information in order to get the /var/crash
>> materials in place and to inspect them.
>>
>> I'm not sure you could do as Konstantin requested
>> based on a non-debug kernel build done the usual
>> way, even with debug information present.
>>
> One step at a time..... 8-)

Looks like you have a debug kernel build, likely with
the debug information. And your problem does not
prevent you from booting the same kernel that panicked
and using the system after that.

>> Are you using a non-debug kernel? A debug-kernel?
>
> The embarrasing truth is I don't know. Whatever is
> default in -current for the Pi3.

Debug-kernel.

>> You might need to try reproducing with a debug
>> kernel. (But that likely will make builds
>> take longer.)
>>
> Any sense of how much longer?  A chromium build,
> when it worked, took a week.

You have already seen the time frames because
you have been using the debug kernel.

If you have noticed stable being faster than
head, the kernel being non-debug for stable
by default vs. debug for head by default is
likely part of the issue. Witness checks,
asserts, etc. take extra time.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
