NetBSD Problem Report #50375
From www@NetBSD.org Wed Oct 28 15:43:49 2015
Return-Path: <www@NetBSD.org>
Received: from mail.netbsd.org (mail.netbsd.org [149.20.53.66])
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
(Client CN "mail.netbsd.org", Issuer "Postmaster NetBSD.org" (verified OK))
by mollari.NetBSD.org (Postfix) with ESMTPS id A0D81A5674
for <gnats-bugs@gnats.NetBSD.org>; Wed, 28 Oct 2015 15:43:49 +0000 (UTC)
Message-Id: <20151028154347.C5BC8A65B7@mollari.NetBSD.org>
Date: Wed, 28 Oct 2015 15:43:47 +0000 (UTC)
From: riz@NetBSD.org
Reply-To: riz@NetBSD.org
To: gnats-bugs@NetBSD.org
Subject: layerfs (nullfs) locking problem leading to livelock
X-Send-Pr-Version: www-1.0
>Number: 50375
>Category: kern
>Synopsis: layerfs (nullfs) locking problem leading to livelock
>Confidential: no
>Severity: critical
>Priority: high
>Responsible: hannken
>State: closed
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Wed Oct 28 15:45:00 +0000 2015
>Closed-Date: Wed Jan 27 08:55:09 +0000 2016
>Last-Modified: Wed Jan 27 08:55:09 +0000 2016
>Originator: Jeff Rizzo
>Release: 7.99.21/evbarm
>Organization:
>Environment:
NetBSD jetson1.lan 7.99.21 NetBSD 7.99.21 (JETSONTK1) #9: Thu Oct 15 14:36:15 PDT 2015 riz@cassava.tastylime.net:/scratch/evbarm7/obj/sys/arch/evbarm/compile/JETSONTK1 evbarm
>Description:
Doing pbulk builds with nullfs mounts in chroots on a 4-core ARM (Tegra TK1) system, I very frequently see a problem where the build stops making progress and a bunch of processes get stuck in 'tstile'. This time I happened to notice that process 28575 was the first to enter tstile. (see below)
When it does this, I can use crash and ddb to get info, and gdb against /dev/mem seems to work somewhat (not "info threads", though), but I have not been able to get a crash dump.
My interpretation of the debugging I got below is that the "culprit" process is PID 14346, which was:
polkit 14346 0.0 0.1 4936 2552 ? D 7:15AM 0:00.50 /usr/bi
1001 14346 26273 34233 125 0 4936 2552 vnode D ? 0:00.50 /usr/bin/make _MAKE OPSYS OS_VERSION LOWER_OPSYS _PKGSRCDIR PKGTOOLS_VERSION _CC _PATH_ORIG _PKGSRC_BARRIER ALLOW_VULNERABLE_PACKAGES all
My understanding is that the next step would be to look at the individual frames of the backtrace of that process to figure out what vp is - I would appreciate suggestions for how to do this with the system live, using either ddb or gdb against /dev/mem. (Assume I don't know what I'm doing, and give me very specific instructions :)
crash> ps/l |grep tstile
29934 1 3 3 0 96f83460 sh tstile
23822 1 3 1 0 9357ce20 sh tstile
28524 1 3 0 0 93dea080 sh tstile
21780 1 3 3 0 93983120 sh tstile
28575 1 3 3 0 96f831a0 python3.4 tstile
2319 1 3 0 0 92fec960 gvfsd-trash tstile
0 67 3 2 200 91c733e0 ioflush tstile
0 9 3 0 200 91596840 vdrain tstile
crash> bt/a 96f831a0
trace: pid 28575 lid 1 at 0xa1f57aa4
0xa1f57aa4: mi_switch+0x10
0xa1f57ad4: sleepq_block+0xb4
0xa1f57b14: turnstile_block+0x318
0xa1f57b8c: rw_vector_enter+0x3c0
0xa1f57bbc: genfs_lock+0x68
0xa1f57be4: VOP_LOCK+0x40
0xa1f57c0c: layer_lock+0x44
0xa1f57c34: VOP_LOCK+0x40
0xa1f57c5c: vn_lock+0x88
0xa1f57cac: lookup_once+0x224
0xa1f57d7c: namei_tryemulroot+0x528
0xa1f57db4: namei+0x34
0xa1f57ddc: fd_nameiat.isra.0+0x64
0xa1f57e4c: do_sys_statat+0x84
0xa1f57f04: sys___stat50+0x2c
0xa1f57f7c: syscall+0xb8
0xa1f57fac: swi_handler+0xa0
crash> ps/w |grep tstile
29934 1 sh netbsd 27 tstile 922b78e4
23822 1 sh netbsd 27 tstile 922b78e4
28524 1 sh netbsd 27 tstile 935fb98c
21780 1 sh netbsd 27 tstile 951f781c
28575 1 python3.4 netbsd 27 tstile 92b1d834
2319 1 gvfsd-trash netbsd 43 tstile 922b78e4
0 67 system netbsd 124 tstile 951f781c
0 9 system netbsd 125 tstile 951f781c
db{3}> show lock 92b1d834
lock address : 0x0000000092b1d834 type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 1
current cpu : 3 last held: 2
current lwp : 0x00000000915c10c0 last held: 0x0000000093450300
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000093450300 flags : 0x0000000000000007
Turnstile chain at 0x81609eb0.
=> Turnstile at 0x9706bd90 (wrq=0x9706bda0, rdq=0x9706bda8).
=> 0 waiting readers:
=> 1 waiting writers: 0x96f831a0
db{3}> bt/a 0x0000000093450300
trace: pid 14346 lid 1 at 0x9d6218c4
0x9d6218c4: netbsd:mi_switch+0x10
0x9d6218f4: netbsd:sleepq_block+0xb4
0x9d62192c: netbsd:cv_wait+0x130
0x9d621954: netbsd:vwait+0x50
0x9d62197c: netbsd:vget+0xd4
0x9d6219e4: netbsd:vcache_get+0x158
0x9d621a14: netbsd:layer_node_create+0x2c
0x9d621a44: netbsd:layer_lookup+0xfc
0x9d621a7c: netbsd:VOP_LOOKUP+0x48
0x9d621bdc: netbsd:getcwd_common+0x258
0x9d621bfc: netbsd:vn_isunder+0x2c
0x9d621c4c: netbsd:lookup_once+0xfc
0x9d621d1c: netbsd:namei_tryemulroot+0x528
0x9d621d54: netbsd:namei+0x34
0x9d621e2c: netbsd:vn_open+0x94
0x9d621eac: netbsd:do_open+0xb0
0x9d621edc: netbsd:do_sys_openat+0x7c
0x9d621f04: netbsd:sys_open+0x38
0x9d621f7c: netbsd:syscall+0xb8
0x9d621fac: netbsd:swi_handler+0xa0
db{3}>
db{3}> show lock 922b78e4
lock address : 0x00000000922b78e4 type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 3
current cpu : 3 last held: 0
current lwp : 0x00000000915c10c0 last held: 0x0000000093dea080
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000093dea080 flags : 0x0000000000000007
Turnstile chain at 0x81609f60.
=> Turnstile at 0x9706b6c8 (wrq=0x9706b6d8, rdq=0x9706b6e0).
=> 0 waiting readers:
=> 3 waiting writers: 0x92fec960 0x9357ce20 0x96f83460
db{3}> show lock 935fb98c
lock address : 0x00000000935fb98c type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 1
current cpu : 3 last held: 3
current lwp  : 0x00000000915c10c0 last held: 0x0000000093983120
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000093983120 flags : 0x0000000000000007
Turnstile chain at 0x8160a008.
=> Turnstile at 0x9706afc8 (wrq=0x9706afd8, rdq=0x9706afe0).
=> 0 waiting readers:
=> 1 waiting writers: 0x93dea080
db{3}> show lock 951f781c
lock address : 0x00000000951f781c type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 3
current cpu : 3 last held: 3
current lwp : 0x00000000915c10c0 last held: 0x0000000096f831a0
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000096f831a0 flags : 0x0000000000000007
Turnstile chain at 0x81609e98.
=> Turnstile at 0x9706af90 (wrq=0x9706afa0, rdq=0x9706afa8).
=> 0 waiting readers:
=> 3 waiting writers: 0x91596840 0x91c733e0 0x93983120
db{3}>
db{3}> bt/a 0x0000000093dea080
trace: pid 28524 lid 1 at 0xa4aa7aa4
0xa4aa7aa4: netbsd:mi_switch+0x10
0xa4aa7ad4: netbsd:sleepq_block+0xb4
0xa4aa7b14: netbsd:turnstile_block+0x318
0xa4aa7b8c: netbsd:rw_enter+0x3c0
0xa4aa7bbc: netbsd:genfs_lock+0x68
0xa4aa7be4: netbsd:VOP_LOCK+0x40
0xa4aa7c0c: netbsd:layer_lock+0x44
0xa4aa7c34: netbsd:VOP_LOCK+0x40
0xa4aa7c5c: netbsd:vn_lock+0x88
0xa4aa7cac: netbsd:lookup_once+0x224
0xa4aa7d7c: netbsd:namei_tryemulroot+0x528
0xa4aa7db4: netbsd:namei+0x34
0xa4aa7ddc: netbsd:fd_nameiat.isra.0+0x64
0xa4aa7e4c: netbsd:do_sys_statat+0x84
0xa4aa7f04: netbsd:sys___stat50+0x2c
0xa4aa7f7c: netbsd:syscall+0xb8
0xa4aa7fac: netbsd:swi_handler+0xa0
db{3}> bt/a 0x0000000093983120
trace: pid 21780 lid 1 at 0x9ec71aa4
0x9ec71aa4: netbsd:mi_switch+0x10
0x9ec71ad4: netbsd:sleepq_block+0xb4
0x9ec71b14: netbsd:turnstile_block+0x318
0x9ec71b8c: netbsd:rw_enter+0x3c0
0x9ec71bbc: netbsd:genfs_lock+0x68
0x9ec71be4: netbsd:VOP_LOCK+0x40
0x9ec71c0c: netbsd:layer_lock+0x44
0x9ec71c34: netbsd:VOP_LOCK+0x40
0x9ec71c5c: netbsd:vn_lock+0x88
0x9ec71cac: netbsd:lookup_once+0x224
0x9ec71d7c: netbsd:namei_tryemulroot+0x528
0x9ec71db4: netbsd:namei+0x34
0x9ec71ddc: netbsd:fd_nameiat.isra.0+0x64
0x9ec71e4c: netbsd:do_sys_statat+0x84
0x9ec71f04: netbsd:sys___stat50+0x2c
0x9ec71f7c: netbsd:syscall+0xb8
0x9ec71fac: netbsd:swi_handler+0xa0
db{3}> bt/a 0x0000000096f831a0
trace: pid 28575 lid 1 at 0xa1f57aa4
0xa1f57aa4: netbsd:mi_switch+0x10
0xa1f57ad4: netbsd:sleepq_block+0xb4
0xa1f57b14: netbsd:turnstile_block+0x318
0xa1f57b8c: netbsd:rw_enter+0x3c0
0xa1f57bbc: netbsd:genfs_lock+0x68
0xa1f57be4: netbsd:VOP_LOCK+0x40
0xa1f57c0c: netbsd:layer_lock+0x44
0xa1f57c34: netbsd:VOP_LOCK+0x40
0xa1f57c5c: netbsd:vn_lock+0x88
0xa1f57cac: netbsd:lookup_once+0x224
0xa1f57d7c: netbsd:namei_tryemulroot+0x528
0xa1f57db4: netbsd:namei+0x34
0xa1f57ddc: netbsd:fd_nameiat.isra.0+0x64
0xa1f57e4c: netbsd:do_sys_statat+0x84
0xa1f57f04: netbsd:sys___stat50+0x2c
0xa1f57f7c: netbsd:syscall+0xb8
0xa1f57fac: netbsd:swi_handler+0xa0
db{3}> bt/a 96f83460
trace: pid 29934 lid 1 at 0x9e277a0c
0x9e277a0c: netbsd:mi_switch+0x10
0x9e277a3c: netbsd:sleepq_block+0xb4
0x9e277a7c: netbsd:turnstile_block+0x318
0x9e277af4: netbsd:rw_enter+0x3c0
0x9e277b24: netbsd:genfs_lock+0x68
0x9e277b4c: netbsd:VOP_LOCK+0x40
0x9e277b74: netbsd:layer_lock+0x44
0x9e277b9c: netbsd:VOP_LOCK+0x68
0x9e277bc4: netbsd:vn_lock+0x88
0x9e277bdc: netbsd:layerfs_root+0x38
0x9e277bfc: netbsd:VFS_ROOT+0x30
0x9e277c4c: netbsd:lookup_once+0x29c
0x9e277d1c: netbsd:namei_tryemulroot+0x528
0x9e277d54: netbsd:namei+0x34
0x9e277e2c: netbsd:vn_open+0x94
0x9e277eac: netbsd:do_open+0xb0
0x9e277edc: netbsd:do_sys_openat+0x7c
0x9e277f04: netbsd:sys_open+0x38
0x9e277f7c: netbsd:syscall+0xb8
0x9e277fac: netbsd:swi_handler+0xa0
db{3}> bt/a 9357ce20
trace: pid 23822 lid 1 at 0x9ce49a6c
0x9ce49a6c: netbsd:mi_switch+0x10
0x9ce49a9c: netbsd:sleepq_block+0xb4
0x9ce49adc: netbsd:turnstile_block+0x318
0x9ce49b54: netbsd:rw_enter+0x3c0
0x9ce49b84: netbsd:genfs_lock+0x68
0x9ce49bac: netbsd:VOP_LOCK+0x40
0x9ce49bd4: netbsd:layer_lock+0x44
0x9ce49bfc: netbsd:VOP_LOCK+0x68
0x9ce49c24: netbsd:vn_lock+0x88
0x9ce49c3c: netbsd:layerfs_root+0x38
0x9ce49c5c: netbsd:VFS_ROOT+0x30
0x9ce49cac: netbsd:lookup_once+0x29c
0x9ce49d7c: netbsd:namei_tryemulroot+0x528
0x9ce49db4: netbsd:namei+0x34
0x9ce49ddc: netbsd:fd_nameiat.isra.0+0x64
0x9ce49e4c: netbsd:do_sys_statat+0x84
0x9ce49f04: netbsd:sys___stat50+0x2c
0x9ce49f7c: netbsd:syscall+0xb8
0x9ce49fac: netbsd:swi_handler+0xa0
db{3}> bt/a 92fec960
trace: pid 2319 lid 1 at 0x9d483a0c
0x9d483a0c: netbsd:mi_switch+0x10
0x9d483a3c: netbsd:sleepq_block+0xb4
0x9d483a7c: netbsd:turnstile_block+0x318
0x9d483af4: netbsd:rw_enter+0x3c0
0x9d483b24: netbsd:genfs_lock+0x68
0x9d483b4c: netbsd:VOP_LOCK+0x40
0x9d483b74: netbsd:layer_lock+0x44
0x9d483b9c: netbsd:VOP_LOCK+0x68
0x9d483bc4: netbsd:vn_lock+0x88
0x9d483bdc: netbsd:layerfs_root+0x38
0x9d483bfc: netbsd:VFS_ROOT+0x30
0x9d483c4c: netbsd:lookup_once+0x29c
0x9d483d1c: netbsd:namei_tryemulroot+0x528
0x9d483d54: netbsd:namei+0x34
0x9d483e2c: netbsd:vn_open+0x94
0x9d483eac: netbsd:do_open+0xb0
0x9d483edc: netbsd:do_sys_openat+0x7c
0x9d483f04: netbsd:sys_open+0x38
0x9d483f7c: netbsd:syscall+0xb8
0x9d483fac: netbsd:swi_handler+0xa0
db{3}> bt/a 91c733e0
trace: pid 0 lid 67 at 0x9aaa9d64
0x9aaa9d64: netbsd:mi_switch+0x10
0x9aaa9d94: netbsd:sleepq_block+0xb4
0x9aaa9dd4: netbsd:turnstile_block+0x318
0x9aaa9e4c: netbsd:rw_enter+0x3c0
0x9aaa9e7c: netbsd:genfs_lock+0x68
0x9aaa9ea4: netbsd:VOP_LOCK+0x40
0x9aaa9ecc: netbsd:vn_lock+0x88
0x9aaa9f2c: netbsd:ffs_sync+0xb0
0x9aaa9f4c: netbsd:VFS_SYNC+0x30
0x9aaa9fac: netbsd:sched_sync+0x27c
db{3}> bt/a 91596840
trace: pid 0 lid 9 at 0x9a825d74
0x9a825d74: netbsd:mi_switch+0x10
0x9a825da4: netbsd:sleepq_block+0xb4
0x9a825de4: netbsd:turnstile_block+0x318
0x9a825e5c: netbsd:rw_enter+0x3c0
0x9a825e8c: netbsd:genfs_lock+0x68
0x9a825eb4: netbsd:VOP_LOCK+0x40
0x9a825edc: netbsd:layer_lock+0x44
0x9a825f04: netbsd:VOP_LOCK+0x40
0x9a825f2c: netbsd:vn_lock+0x88
0x9a825f5c: netbsd:vclean+0x74
0x9a825f8c: netbsd:cleanvnode+0xf4
0x9a825fac: netbsd:vdrain_thread+0x68
db{3}>
>How-To-Repeat:
Build pbulk packages on top of layerfs
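A minimal sketch of such a setup follows; the paths and chroot layout here are illustrative assumptions, not the submitter's exact pbulk configuration:

```shell
# Illustrative sketch only -- paths and chroot layout are assumptions.
# Create a chroot and null-mount (layerfs) pkgsrc into it, as pbulk does:
mkdir -p /build/chroot/usr/pkgsrc
mount -t null /usr/pkgsrc /build/chroot/usr/pkgsrc
# Then run several package builds concurrently inside the chroot;
# heavy parallel pathname lookups through the null mount are what
# trigger the tstile livelock described above.
```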
>Fix:
>Release-Note:
>Audit-Trail:
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@NetBSD.org>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Wed, 28 Oct 2015 17:26:38 +0100
> My understanding is that the next step would be to look at the
individual frames of the backtrace of that process to figure out what vp
is - I would appreciate suggestions for how to do this with the system
live, using either ddb or gdb against /dev/mem. (Assume I don't know
what I'm doing, and give me very specific instructions :)
So you have the tstile threads lwp address (either from crash or ddb or
ps).
Running gdb with netbsd.gdb against /dev/mem
gdb netbsd.gdb
target kvm /dev/mem
you should be able to get a backtrace with arguments with
kvm proc 0x96f831a0
bt
These backtraces for all threads in tstile should help.
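Collected in one place, the suggested session looks roughly like the following (assuming netbsd.gdb is the running kernel built with debug symbols; the frame number and `vp` variable are illustrative, taken from the lookup_once/vn_lock traces above):

```shell
# Assumes netbsd.gdb is the debug-symbol build of the running kernel.
gdb netbsd.gdb
# Inside gdb:
#   (gdb) target kvm /dev/mem      # attach to the live kernel
#   (gdb) kvm proc 0x96f831a0      # switch to a tstile'd lwp address
#   (gdb) bt                       # backtrace with arguments
#   (gdb) up                       # walk up to e.g. the vn_lock frame
#   (gdb) print *vp                # inspect the vnode being locked
```
Repeating `kvm proc`/`bt` for each lwp address from `ps/l` gives the full picture of who holds and who waits on each vnode lock.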
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Wed, 28 Oct 2015 11:48:26 -0700
I got it to happen again; the system is still running for now (in case
more info is needed).
I used gdb to get some more information about *vp for some of the lwps.
crash> ps/l |grep tstile
17838 1 3 0 0 94eed4a0 libtool-wrapper tstile
7793 1 3 3 0 95b74be0 sh tstile
24091 1 3 2 0 95c6a940 libtool-wrapper tstile
16789 1 3 2 0 95369360 sh tstile
28195 1 3 0 0 95893960 libtool-wrapper tstile
18132 1 3 1 0 958628c0 sh tstile
25124 1 3 2 0 95768060 make tstile
0 67 3 0 200 91c37120 ioflush tstile
0 9 3 3 200 91596840 vdrain tstile
crash> ps/w |grep tstile
17838 1 libtool-wrapper netbsd 27 tstile 9222643c
7793 1 sh netbsd 27 tstile 9222643c
24091 1 libtool-wrapper netbsd 27 tstile 9222643c
16789 1 sh netbsd 27 tstile 9222643c
28195 1 libtool-wrapper netbsd 27 tstile 9222643c
18132 1 sh netbsd 27 tstile 92314e9c
25124 1 make netbsd 27 tstile 94c17784
0 67 system netbsd 124 tstile 92314e9c
0 9 system netbsd 125 tstile 92314e9c
db{1}> show lock 9222643c
lock address : 0x000000009222643c type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 5
current cpu : 1 last held: 1
current lwp : 0x00000000915a9360 last held: 0x00000000958628c0
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x00000000958628c0 flags : 0x0000000000000007
Turnstile chain at 0x81609eb8.
=> Turnstile at 0x9512c7b0 (wrq=0x9512c7c0, rdq=0x9512c7c8).
=> 0 waiting readers:
=> 5 waiting writers: 0x95893960 0x95369360 0x95c6a940 0x95b74be0 0x94eed4a0
db{1}> show lock 92314e9c
lock address : 0x0000000092314e9c type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 3
current cpu : 1 last held: 2
current lwp : 0x00000000915a9360 last held: 0x0000000095768060
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000095768060 flags : 0x0000000000000007
Turnstile chain at 0x81609f18.
=> Turnstile at 0x9512d4d0 (wrq=0x9512d4e0, rdq=0x9512d4e8).
=> 0 waiting readers:
=> 3 waiting writers: 0x91596840 0x91c37120 0x958628c0
db{1}> show lock 94c17784
lock address : 0x0000000094c17784 type : sleep/adaptive
initialized : 0x000000008136442c
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 1
current cpu : 1 last held: 1
current lwp : 0x00000000915a9360 last held: 0x0000000095863c00
last locked* : 0x00000000813795f8 unlocked : 0x0000000081379714
owner/count : 0x0000000095863c00 flags : 0x0000000000000007
Turnstile chain at 0x8160a000.
=> Turnstile at 0x91598070 (wrq=0x91598080, rdq=0x91598088).
=> 0 waiting readers:
=> 1 waiting writers: 0x95768060
db{1}>
db{1}> bt/a 0x00000000958628c0
trace: pid 18132 lid 1 at 0xa3e9daa4
0xa3e9daa4: netbsd:mi_switch+0x10
0xa3e9dad4: netbsd:sleepq_block+0xb4
0xa3e9db14: netbsd:turnstile_block+0x318
0xa3e9db8c: netbsd:rw_enter+0x3c0
0xa3e9dbbc: netbsd:genfs_lock+0x68
0xa3e9dbe4: netbsd:VOP_LOCK+0x40
0xa3e9dc0c: netbsd:layer_lock+0x44
0xa3e9dc34: netbsd:VOP_LOCK+0x40
0xa3e9dc5c: netbsd:vn_lock+0x88
0xa3e9dcac: netbsd:lookup_once+0x224
0xa3e9dd7c: netbsd:namei_tryemulroot+0x528
0xa3e9ddb4: netbsd:namei+0x34
0xa3e9dddc: netbsd:fd_nameiat.isra.0+0x64
0xa3e9de4c: netbsd:do_sys_statat+0x84
0xa3e9df04: netbsd:sys___stat50+0x2c
0xa3e9df7c: netbsd:syscall+0xb8
0xa3e9dfac: netbsd:swi_handler+0xa0
db{1}> bt/a 0x0000000095768060
trace: pid 25124 lid 1 at 0xa3bc5a44
0xa3bc5a44: netbsd:mi_switch+0x10
0xa3bc5a74: netbsd:sleepq_block+0xb4
0xa3bc5ab4: netbsd:turnstile_block+0x318
0xa3bc5b2c: netbsd:rw_enter+0x3c0
0xa3bc5b5c: netbsd:genfs_lock+0x68
0xa3bc5b84: netbsd:VOP_LOCK+0x40
0xa3bc5bac: netbsd:layer_lock+0x44
0xa3bc5bd4: netbsd:VOP_LOCK+0x40
0xa3bc5bfc: netbsd:vn_lock+0x88
0xa3bc5c4c: netbsd:lookup_once+0x498
0xa3bc5d1c: netbsd:namei_tryemulroot+0x528
0xa3bc5d54: netbsd:namei+0x34
0xa3bc5e2c: netbsd:vn_open+0x94
0xa3bc5eac: netbsd:do_open+0xb0
0xa3bc5edc: netbsd:do_sys_openat+0x7c
0xa3bc5f04: netbsd:sys_open+0x38
0xa3bc5f7c: netbsd:syscall+0xb8
0xa3bc5fac: netbsd:swi_handler+0xa0
db{1}> bt/a 0x0000000095863c00
trace: pid 21191 lid 1 at 0xa1a398c4
0xa1a398c4: netbsd:mi_switch+0x10
0xa1a398f4: netbsd:sleepq_block+0xb4
0xa1a3992c: netbsd:cv_wait+0x130
0xa1a39954: netbsd:vwait+0x50
0xa1a3997c: netbsd:vget+0xd4
0xa1a399e4: netbsd:vcache_get+0x158
0xa1a39a14: netbsd:layer_node_create+0x2c
0xa1a39a44: netbsd:layer_lookup+0xfc
0xa1a39a7c: netbsd:VOP_LOOKUP+0x48
0xa1a39bdc: netbsd:getcwd_common+0x258
0xa1a39bfc: netbsd:vn_isunder+0x2c
0xa1a39c4c: netbsd:lookup_once+0xfc
0xa1a39d1c: netbsd:namei_tryemulroot+0x528
0xa1a39d54: netbsd:namei+0x34
0xa1a39e2c: netbsd:vn_open+0x94
0xa1a39eac: netbsd:do_open+0xb0
0xa1a39edc: netbsd:do_sys_openat+0x7c
0xa1a39f04: netbsd:sys_open+0x38
0xa1a39f7c: netbsd:syscall+0xb8
0xa1a39fac: netbsd:swi_handler+0xa0
db{1}> bt/a 94eed4a0
trace: pid 17838 lid 1 at 0x9f3f7a0c
0x9f3f7a0c: netbsd:mi_switch+0x10
0x9f3f7a3c: netbsd:sleepq_block+0xb4
0x9f3f7a7c: netbsd:turnstile_block+0x318
0x9f3f7af4: netbsd:rw_enter+0x3c0
0x9f3f7b24: netbsd:genfs_lock+0x68
0x9f3f7b4c: netbsd:VOP_LOCK+0x40
0x9f3f7b74: netbsd:layer_lock+0x44
0x9f3f7b9c: netbsd:VOP_LOCK+0x68
0x9f3f7bc4: netbsd:vn_lock+0x88
0x9f3f7bdc: netbsd:layerfs_root+0x38
0x9f3f7bfc: netbsd:VFS_ROOT+0x30
0x9f3f7c4c: netbsd:lookup_once+0x29c
0x9f3f7d1c: netbsd:namei_tryemulroot+0x528
0x9f3f7d54: netbsd:namei+0x34
0x9f3f7e2c: netbsd:vn_open+0x94
0x9f3f7eac: netbsd:do_open+0xb0
0x9f3f7edc: netbsd:do_sys_openat+0x7c
0x9f3f7f04: netbsd:sys_open+0x38
0x9f3f7f7c: netbsd:syscall+0xb8
0x9f3f7fac: netbsd:swi_handler+0xa0
db{1}> bt/a 95b74be0
trace: pid 7793 lid 1 at 0xa3bcba0c
0xa3bcba0c: netbsd:mi_switch+0x10
0xa3bcba3c: netbsd:sleepq_block+0xb4
0xa3bcba7c: netbsd:turnstile_block+0x318
0xa3bcbaf4: netbsd:rw_enter+0x3c0
0xa3bcbb24: netbsd:genfs_lock+0x68
0xa3bcbb4c: netbsd:VOP_LOCK+0x40
0xa3bcbb74: netbsd:layer_lock+0x44
0xa3bcbb9c: netbsd:VOP_LOCK+0x68
0xa3bcbbc4: netbsd:vn_lock+0x88
0xa3bcbbdc: netbsd:layerfs_root+0x38
0xa3bcbbfc: netbsd:VFS_ROOT+0x30
0xa3bcbc4c: netbsd:lookup_once+0x29c
0xa3bcbd1c: netbsd:namei_tryemulroot+0x528
0xa3bcbd54: netbsd:namei+0x34
0xa3bcbe2c: netbsd:vn_open+0x94
0xa3bcbeac: netbsd:do_open+0xb0
0xa3bcbedc: netbsd:do_sys_openat+0x7c
0xa3bcbf04: netbsd:sys_open+0x38
0xa3bcbf7c: netbsd:syscall+0xb8
0xa3bcbfac: netbsd:swi_handler+0xa0
db{1}> bt/a 95c6a940
trace: pid 24091 lid 1 at 0xa3a03a0c
0xa3a03a0c: netbsd:mi_switch+0x10
0xa3a03a3c: netbsd:sleepq_block+0xb4
0xa3a03a7c: netbsd:turnstile_block+0x318
0xa3a03af4: netbsd:rw_enter+0x3c0
0xa3a03b24: netbsd:genfs_lock+0x68
0xa3a03b4c: netbsd:VOP_LOCK+0x40
0xa3a03b74: netbsd:layer_lock+0x44
0xa3a03b9c: netbsd:VOP_LOCK+0x68
0xa3a03bc4: netbsd:vn_lock+0x88
0xa3a03bdc: netbsd:layerfs_root+0x38
0xa3a03bfc: netbsd:VFS_ROOT+0x30
0xa3a03c4c: netbsd:lookup_once+0x29c
0xa3a03d1c: netbsd:namei_tryemulroot+0x528
0xa3a03d54: netbsd:namei+0x34
0xa3a03e2c: netbsd:vn_open+0x94
0xa3a03eac: netbsd:do_open+0xb0
0xa3a03edc: netbsd:do_sys_openat+0x7c
0xa3a03f04: netbsd:sys_open+0x38
0xa3a03f7c: netbsd:syscall+0xb8
0xa3a03fac: netbsd:swi_handler+0xa0
db{1}> bt/a 95369360
trace: pid 16789 lid 1 at 0x9e569a6c
0x9e569a6c: netbsd:mi_switch+0x10
0x9e569a9c: netbsd:sleepq_block+0xb4
0x9e569adc: netbsd:turnstile_block+0x318
0x9e569b54: netbsd:rw_enter+0x3c0
0x9e569b84: netbsd:genfs_lock+0x68
0x9e569bac: netbsd:VOP_LOCK+0x40
0x9e569bd4: netbsd:layer_lock+0x44
0x9e569bfc: netbsd:VOP_LOCK+0x68
0x9e569c24: netbsd:vn_lock+0x88
0x9e569c3c: netbsd:layerfs_root+0x38
0x9e569c5c: netbsd:VFS_ROOT+0x30
0x9e569cac: netbsd:lookup_once+0x29c
0x9e569d7c: netbsd:namei_tryemulroot+0x528
0x9e569db4: netbsd:namei+0x34
0x9e569ddc: netbsd:fd_nameiat.isra.0+0x64
0x9e569e4c: netbsd:do_sys_statat+0x84
0x9e569f04: netbsd:sys___stat50+0x2c
0x9e569f7c: netbsd:syscall+0xb8
0x9e569fac: netbsd:swi_handler+0xa0
db{1}> bt/a 95893960
trace: pid 28195 lid 1 at 0xa11ffa0c
0xa11ffa0c: netbsd:mi_switch+0x10
0xa11ffa3c: netbsd:sleepq_block+0xb4
0xa11ffa7c: netbsd:turnstile_block+0x318
0xa11ffaf4: netbsd:rw_enter+0x3c0
0xa11ffb24: netbsd:genfs_lock+0x68
0xa11ffb4c: netbsd:VOP_LOCK+0x40
0xa11ffb74: netbsd:layer_lock+0x44
0xa11ffb9c: netbsd:VOP_LOCK+0x68
0xa11ffbc4: netbsd:vn_lock+0x88
0xa11ffbdc: netbsd:layerfs_root+0x38
0xa11ffbfc: netbsd:VFS_ROOT+0x30
0xa11ffc4c: netbsd:lookup_once+0x29c
0xa11ffd1c: netbsd:namei_tryemulroot+0x528
0xa11ffd54: netbsd:namei+0x34
0xa11ffe2c: netbsd:vn_open+0x94
0xa11ffeac: netbsd:do_open+0xb0
0xa11ffedc: netbsd:do_sys_openat+0x7c
0xa11fff04: netbsd:sys_open+0x38
0xa11fff7c: netbsd:syscall+0xb8
0xa11fffac: netbsd:swi_handler+0xa0
db{1}> bt/a 958628c0
trace: pid 18132 lid 1 at 0xa3e9daa4
0xa3e9daa4: netbsd:mi_switch+0x10
0xa3e9dad4: netbsd:sleepq_block+0xb4
0xa3e9db14: netbsd:turnstile_block+0x318
0xa3e9db8c: netbsd:rw_enter+0x3c0
0xa3e9dbbc: netbsd:genfs_lock+0x68
0xa3e9dbe4: netbsd:VOP_LOCK+0x40
0xa3e9dc0c: netbsd:layer_lock+0x44
0xa3e9dc34: netbsd:VOP_LOCK+0x40
0xa3e9dc5c: netbsd:vn_lock+0x88
0xa3e9dcac: netbsd:lookup_once+0x224
0xa3e9dd7c: netbsd:namei_tryemulroot+0x528
0xa3e9ddb4: netbsd:namei+0x34
0xa3e9dddc: netbsd:fd_nameiat.isra.0+0x64
0xa3e9de4c: netbsd:do_sys_statat+0x84
0xa3e9df04: netbsd:sys___stat50+0x2c
0xa3e9df7c: netbsd:syscall+0xb8
0xa3e9dfac: netbsd:swi_handler+0xa0
db{1}> bt/a 95768060
trace: pid 25124 lid 1 at 0xa3bc5a44
0xa3bc5a44: netbsd:mi_switch+0x10
0xa3bc5a74: netbsd:sleepq_block+0xb4
0xa3bc5ab4: netbsd:turnstile_block+0x318
0xa3bc5b2c: netbsd:rw_enter+0x3c0
0xa3bc5b5c: netbsd:genfs_lock+0x68
0xa3bc5b84: netbsd:VOP_LOCK+0x40
0xa3bc5bac: netbsd:layer_lock+0x44
0xa3bc5bd4: netbsd:VOP_LOCK+0x40
0xa3bc5bfc: netbsd:vn_lock+0x88
0xa3bc5c4c: netbsd:lookup_once+0x498
0xa3bc5d1c: netbsd:namei_tryemulroot+0x528
0xa3bc5d54: netbsd:namei+0x34
0xa3bc5e2c: netbsd:vn_open+0x94
0xa3bc5eac: netbsd:do_open+0xb0
0xa3bc5edc: netbsd:do_sys_openat+0x7c
0xa3bc5f04: netbsd:sys_open+0x38
0xa3bc5f7c: netbsd:syscall+0xb8
0xa3bc5fac: netbsd:swi_handler+0xa0
db{1}> bt/a 91c37120
trace: pid 0 lid 67 at 0x9aaabd64
0x9aaabd64: netbsd:mi_switch+0x10
0x9aaabd94: netbsd:sleepq_block+0xb4
0x9aaabdd4: netbsd:turnstile_block+0x318
0x9aaabe4c: netbsd:rw_enter+0x3c0
0x9aaabe7c: netbsd:genfs_lock+0x68
0x9aaabea4: netbsd:VOP_LOCK+0x40
0x9aaabecc: netbsd:vn_lock+0x88
0x9aaabf2c: netbsd:ffs_sync+0xb0
0x9aaabf4c: netbsd:VFS_SYNC+0x30
0x9aaabfac: netbsd:sched_sync+0x27c
db{1}> bt/a 91596840
trace: pid 0 lid 9 at 0x9a825d74
0x9a825d74: netbsd:mi_switch+0x10
0x9a825da4: netbsd:sleepq_block+0xb4
0x9a825de4: netbsd:turnstile_block+0x318
0x9a825e5c: netbsd:rw_enter+0x3c0
0x9a825e8c: netbsd:genfs_lock+0x68
0x9a825eb4: netbsd:VOP_LOCK+0x40
0x9a825edc: netbsd:layer_lock+0x44
0x9a825f04: netbsd:VOP_LOCK+0x40
0x9a825f2c: netbsd:vn_lock+0x88
0x9a825f5c: netbsd:vclean+0x74
0x9a825f8c: netbsd:cleanvnode+0xf4
0x9a825fac: netbsd:vdrain_thread+0x68
db{1}>
(gdb) kvm proc 0x0000000095863c00
0x812e9eb8 in mi_switch (l=l@entry=0x95863c00) at
/home/riz/src/sys/kern/kern_synch.c:719
719 in /home/riz/src/sys/kern/kern_synch.c
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95863c00) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812b80c0 in cv_wait (cv=cv@entry=0x9436ef4c, mtx=0x91f3f640) at
/home/riz/src/sys/kern/kern_condvar.c:217
#3 0x81363fc8 in vwait (vp=0x9436ef20, flags=flags@entry=1048576) at
/home/riz/src/sys/kern/vfs_vnode.c:1469
#4 0x813654a8 in vget (vp=vp@entry=0x9436ef20, flags=flags@entry=0,
waitok=waitok@entry=true) at /home/riz/src/sys/kern/vfs_vnode.c:463
#5 0x81365f74 in vcache_get (mp=0x94efd008, key=key@entry=0xa1a399f4,
key_len=key_len@entry=4, vpp=vpp@entry=0xa1a399fc) at
/home/riz/src/sys/kern/vfs_vnode.c:1148
#6 0x81379c74 in layer_node_create (mp=<optimized out>,
lowervp=lowervp@entry=0x92314df8, nvpp=0xa1a39ac4) at
/home/riz/src/sys/miscfs/genfs/layer_subr.c:120
#7 0x8137a478 in layer_lookup (v=0xa1a39a50) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:385
#8 0x8136f380 in VOP_LOOKUP (dvp=dvp@entry=0x92b811e0,
vpp=vpp@entry=0xa1a39ac4, cnp=cnp@entry=0xa1a39ad8) at
/home/riz/src/sys/kern/vnode_if.c:119
#9 0x813531f4 in getcwd_scandir (l=0x95863c00, bufp=0x0,
bpp=0xa1a39ac8, uvpp=0xa1a39ac4, lvpp=<synthetic pointer>) at
/home/riz/src/sys/kern/vfs_getcwd.c:136
#10 getcwd_common (lvp=lvp@entry=0x92b811e0, rvp=<optimized out>,
bpp=bpp@entry=0x0, bufp=bufp@entry=0x0, limit=limit@entry=512,
flags=flags@entry=0, l=l@entry=0x95863c00)
at /home/riz/src/sys/kern/vfs_getcwd.c:415
#11 0x8135358c in vn_isunder (lvp=lvp@entry=0x92b811e0, rvp=<optimized
out>, l=l@entry=0x95863c00) at /home/riz/src/sys/kern/vfs_getcwd.c:456
#12 0x813552d4 in lookup_once (state=state@entry=0xa1a39d28,
searchdir=0x92b811e0, newsearchdir_ret=newsearchdir_ret@entry=0xa1a39cb4,
foundobj_ret=foundobj_ret@entry=0xa1a39cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:947
#13 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#14 namei_tryemulroot (state=state@entry=0xa1a39d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#15 0x813571a8 in namei (ndp=ndp@entry=0xa1a39e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#16 0x813683dc in vn_open (ndp=ndp@entry=0xa1a39e48,
fmode=fmode@entry=1, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#17 0x8135f938 in do_open (l=l@entry=0x95863c00, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=0, open_mode=open_mode@entry=438,
fd=fd@entry=0xa1a39eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#18 0x8135fa78 in do_sys_openat (l=0x95863c00, fdat=fdat@entry=-100,
path=<optimized out>, flags=0, mode=438, fd=fd@entry=0xa1a39eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#19 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa1a39f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#20 0x81012cc4 in sy_call (rval=0xa1a39f18, uap=<optimized out>,
l=0x95863c00, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#21 sy_invoke (code=5, rval=0xa1a39f18, uap=<optimized out>,
l=0x95863c00, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#22 syscall (tf=0xa1a39fb0, l=0x95863c00, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#23 0x81012ecc in swi_handler (tf=0xa1a39fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) up
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
264 /home/riz/src/sys/kern/kern_sleepq.c: No such file or directory.
(gdb) up
#2 0x812b80c0 in cv_wait (cv=cv@entry=0x9436ef4c, mtx=0x91f3f640) at
/home/riz/src/sys/kern/kern_condvar.c:217
217 /home/riz/src/sys/kern/kern_condvar.c: No such file or directory.
(gdb) up
#3 0x81363fc8 in vwait (vp=0x9436ef20, flags=flags@entry=1048576) at
/home/riz/src/sys/kern/vfs_vnode.c:1469
1469 /home/riz/src/sys/kern/vfs_vnode.c: No such file or directory.
(gdb) l
1464 in /home/riz/src/sys/kern/vfs_vnode.c
(gdb) print *vp
$1 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9436ef28},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x95863c00, 0x95863cb8,
0x8146e51c}}, v_size = 0, v_writesize = 0, v_iflag = 1048576,
v_vflag = 16, v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt
= 0, v_synclist_slot = 0,
v_mount = 0x94efd008, v_op = 0x9159ac48, v_freelist = {tqe_next =
0x0, tqe_prev = 0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0,
v_mntvnodes = {tqe_next = 0x0,
tqe_prev = 0x92b81258}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0, tqe_prev
= 0x0}, v_dnclist = {lh_first = 0x0},
v_nclist = {lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket
= 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_NULL, v_lock = {
rw_owner = 0}, v_data = 0x9659f760, v_klist = {slh_first = 0x0}}
(gdb)
(gdb) kvm proc 0x00000000958628c0
0x812e9eb8 in mi_switch (l=l@entry=0x958628c0) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x958628c0) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512d4d0, q=q@entry=1, obj=obj@entry=0x92314e9c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x92314e9c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92314df8, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x975cd640, flags=flags@entry=2)
at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=0x975cd640, flags=flags@entry=2) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x813553fc in lookup_once (state=state@entry=0xa3e9dd88,
searchdir=0x922ae9d0, newsearchdir_ret=newsearchdir_ret@entry=0xa3e9dd14,
foundobj_ret=foundobj_ret@entry=0xa3e9dd18) at
/home/riz/src/sys/kern/vfs_lookup.c:1065
#10 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#11 namei_tryemulroot (state=state@entry=0xa3e9dd88,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#12 0x813571a8 in namei (ndp=ndp@entry=0xa3e9ddf0) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#13 0x8135ce74 in fd_nameiat (fdat=fdat@entry=-100,
ndp=ndp@entry=0xa3e9ddf0, l=<optimized out>) at
/home/riz/src/sys/kern/vfs_syscalls.c:179
#14 0x81361004 in do_sys_statat (l=<optimized out>,
fdat=fdat@entry=-100, userpath=0x7fffde5e <error: Cannot access memory
at address 0x7fffde5e>, nd_flag=nd_flag@entry=64,
sb=sb@entry=0xa3e9de58) at /home/riz/src/sys/kern/vfs_syscalls.c:3042
#15 0x813610c4 in sys___stat50 (l=<optimized out>, uap=0xa3e9dfb8,
retval=<optimized out>) at /home/riz/src/sys/kern/vfs_syscalls.c:3067
#16 0x81012cc4 in sy_call (rval=0xa3e9df18, uap=<optimized out>,
l=0x958628c0, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:65
#17 sy_invoke (code=439, rval=0xa3e9df18, uap=<optimized out>,
l=0x958628c0, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:94
#18 syscall (tf=0xa3e9dfb0, l=0x958628c0, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#19 0x81012ecc in swi_handler (tf=0xa3e9dfb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) up
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
264 /home/riz/src/sys/kern/kern_sleepq.c: No such file or directory.
(gdb) up
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512d4d0, q=q@entry=1, obj=obj@entry=0x92314e9c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
430 /home/riz/src/sys/kern/kern_turnstile.c: No such file or directory.
(gdb) up
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x92314e9c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
387 /home/riz/src/sys/kern/kern_rwlock.c: No such file or directory.
(gdb) up
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
384 /home/riz/src/sys/miscfs/genfs/genfs_vnops.c: No such file or
directory.
(gdb) print vp->v_lock
$2 = {rw_owner = 2507571303}
(gdb) print *vp
$3 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
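Each blocked backtrace above ends in rw_vector_enter() on a vnode's v_lock, and the rw_owner word printed in the dumps identifies the holder. On NetBSD, the low bits of rw_owner are flag bits (per sys/sys/rwlock.h: has-waiters, write-wanted, write-locked) and the rest is the owning LWP's address when the lock is write-held. As a rough illustration (the exact flag layout should be verified against the running kernel's headers), the values above decode like this:

```python
# Decode a NetBSD rw_owner word from a vnode's v_lock dump.
# The low three bits are flag bits; masking them off yields the
# owning LWP address for a write-held lock.  Flag-bit meanings are
# taken from sys/sys/rwlock.h and are an assumption here -- check
# them against the kernel sources in use.
RW_FLAGMASK = 0x7

def decode_rw_owner(rw_owner):
    """Return (owner LWP address as hex string, flag bits)."""
    return hex(rw_owner & ~RW_FLAGMASK), rw_owner & RW_FLAGMASK

# rw_owner values taken from the vnode dumps in this report:
print(decode_rw_owner(2507571303))  # ('0x95768060', 7)
print(decode_rw_owner(2508597447))  # ('0x958628c0', 7)
```

The first value decodes to LWP 0x95768060, which is exactly the process examined with "kvm proc" next; the second decodes to 0x958628c0, the LWP blocked in the first backtrace — consistent with the two threads holding each other's locks.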
(gdb) kvm proc 0x0000000095768060
0x812e9eb8 in mi_switch (l=l@entry=0x95768060) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95768060) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>, ts@entry=0x0,
q=q@entry=1, obj=obj@entry=0x94c17784, sobj=sobj@entry=0x8153f5ac
<rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x94c17784,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x94c176e0, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x948159a0,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x948159a0, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81355670 in lookup_once (state=state@entry=0xa3bc5d28,
searchdir=0x948159a0, newsearchdir_ret=newsearchdir_ret@entry=0xa3bc5cb4,
foundobj_ret=foundobj_ret@entry=0xa3bc5cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1067
#10 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#11 namei_tryemulroot (state=state@entry=0xa3bc5d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#12 0x813571a8 in namei (ndp=ndp@entry=0xa3bc5e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#13 0x813683dc in vn_open (ndp=ndp@entry=0xa3bc5e48,
fmode=fmode@entry=1, cmode=cmode@entry=1324) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#14 0x8135f938 in do_open (l=l@entry=0x95768060, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=0, open_mode=open_mode@entry=5420,
fd=fd@entry=0xa3bc5eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#15 0x8135fa78 in do_sys_openat (l=0x95768060, fdat=fdat@entry=-100,
path=<optimized out>, flags=0, mode=5420, fd=fd@entry=0xa3bc5eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#16 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa3bc5f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#17 0x81012cc4 in sy_call (rval=0xa3bc5f18, uap=<optimized out>,
l=0x95768060, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#18 sy_invoke (code=5, rval=0xa3bc5f18, uap=<optimized out>,
l=0x95768060, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#19 syscall (tf=0xa3bc5fb0, l=0x95768060, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#20 0x81012ecc in swi_handler (tf=0xa3bc5fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) up
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
264 /home/riz/src/sys/kern/kern_sleepq.c: No such file or directory.
(gdb) up
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>, ts@entry=0x0,
q=q@entry=1, obj=obj@entry=0x94c17784, sobj=sobj@entry=0x8153f5ac
<rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
430 /home/riz/src/sys/kern/kern_turnstile.c: No such file or directory.
(gdb) up
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x94c17784,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
387 /home/riz/src/sys/kern/kern_rwlock.c: No such file or directory.
(gdb) up
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
384 /home/riz/src/sys/miscfs/genfs/genfs_vnops.c: No such file or
directory.
(gdb) print *vp
$4 = {v_uobj = {vmobjlock = 0x946fde00, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x94c176e8},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x94c1770c, 0x8146e51c}},
v_size = 55808, v_writesize = 55808, v_iflag = 0, v_vflag = 48,
v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 7,
v_synclist_slot = 0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92314df8, tqe_prev =
0x9312eedc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x9202c010,
tqe_prev = 0x936a64a8},
v_cleanblkhd = {lh_first = 0x94f54d80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x926d78c0}, v_nclist = {
lh_first = 0x9252ba80}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2508602375}, v_data = 0x924cbc40, v_klist = {slh_first =
0x0}}
(gdb)
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: riz@tastylime.net
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Wed, 28 Oct 2015 20:13:04 +0100
> I used gdb to get some more information about *vp for some of the lwps.
Please do it for ALL lwps in tstile and ALL vnodes on these traces
that are arguments to VOP_LOCK or vget.
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Wed, 28 Oct 2015 12:39:49 -0700
This is hopefully all the LWPs in tstile (plus one that is not in
tstile itself, but is being waited on), and the vnodes. Hopefully I
didn't miss any.
(gdb) kvm proc 0x94eed4a0
0x812e9eb8 in mi_switch (l=l@entry=0x94eed4a0) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x94eed4a0) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512c7b0, q=q@entry=1, obj=obj@entry=0x9222643c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x9222643c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x95063f00, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81379db0 in layerfs_root (mp=<optimized out>, vpp=0x9f3f7c1c) at
/home/riz/src/sys/miscfs/genfs/layer_vfsops.c:149
#10 0x8135bc10 in VFS_ROOT (mp=mp@entry=0x95061008,
a=a@entry=0x9f3f7c1c) at /home/riz/src/sys/kern/vfs_subr.c:1307
#11 0x81355474 in lookup_once (state=state@entry=0x9f3f7d28,
searchdir=0x91fe0180, newsearchdir_ret=newsearchdir_ret@entry=0x9f3f7cb4,
foundobj_ret=foundobj_ret@entry=0x9f3f7cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1094
#12 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#13 namei_tryemulroot (state=state@entry=0x9f3f7d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#14 0x813571a8 in namei (ndp=ndp@entry=0x9f3f7e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#15 0x813683dc in vn_open (ndp=ndp@entry=0x9f3f7e48,
fmode=fmode@entry=522, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#16 0x8135f938 in do_open (l=l@entry=0x94eed4a0, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=521, open_mode=open_mode@entry=438,
fd=fd@entry=0x9f3f7eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#17 0x8135fa78 in do_sys_openat (l=0x94eed4a0, fdat=fdat@entry=-100,
path=<optimized out>, flags=521, mode=438, fd=fd@entry=0x9f3f7eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#18 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0x9f3f7f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#19 0x81012cc4 in sy_call (rval=0x9f3f7f18, uap=<optimized out>,
l=0x94eed4a0, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#20 sy_invoke (code=5, rval=0x9f3f7f18, uap=<optimized out>,
l=0x94eed4a0, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#21 syscall (tf=0x9f3f7fb0, l=0x94eed4a0, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#22 0x81012ecc in swi_handler (tf=0x9f3f7fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$10 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x922263a0},
uo_npages = 0, uo_refs = 7, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x922263c4, 0x8146e51c}},
v_size = 512, v_writesize = 512, v_iflag = 0, v_vflag = 49, v_uflag =
0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92c3f8e8, tqe_prev =
0x91c1b71c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x922aea80,
tqe_prev = 0x920d3010},
v_cleanblkhd = {lh_first = 0x92e3dd80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x92b6d7c0}, v_nclist = {
lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket = 0x0,
vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR,
v_tag = VT_UFS, v_lock = {
rw_owner = 2508597447}, v_data = 0x922240c0, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$11 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x95063f08},
uo_npages = 0, uo_refs = 4, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x95063f2c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x95061008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x95063e50, tqe_prev =
0x95061010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x915818b0,
v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x95b74be0
0x812e9eb8 in mi_switch (l=l@entry=0x95b74be0) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95b74be0) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512c7b0, q=q@entry=1, obj=obj@entry=0x9222643c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x9222643c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x93f6ab08,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x93f6ab08, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81379db0 in layerfs_root (mp=<optimized out>, vpp=0xa3bcbc1c) at
/home/riz/src/sys/miscfs/genfs/layer_vfsops.c:149
#10 0x8135bc10 in VFS_ROOT (mp=mp@entry=0x94f02008,
a=a@entry=0xa3bcbc1c) at /home/riz/src/sys/kern/vfs_subr.c:1307
#11 0x81355474 in lookup_once (state=state@entry=0xa3bcbd28,
searchdir=0x93143a30, newsearchdir_ret=newsearchdir_ret@entry=0xa3bcbcb4,
foundobj_ret=foundobj_ret@entry=0xa3bcbcb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1094
#12 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#13 namei_tryemulroot (state=state@entry=0xa3bcbd28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#14 0x813571a8 in namei (ndp=ndp@entry=0xa3bcbe48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#15 0x813683dc in vn_open (ndp=ndp@entry=0xa3bcbe48,
fmode=fmode@entry=522, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#16 0x8135f938 in do_open (l=l@entry=0x95b74be0, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=521, open_mode=open_mode@entry=438,
fd=fd@entry=0xa3bcbeec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#17 0x8135fa78 in do_sys_openat (l=0x95b74be0, fdat=fdat@entry=-100,
path=<optimized out>, flags=521, mode=438, fd=fd@entry=0xa3bcbeec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#18 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa3bcbf18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#19 0x81012cc4 in sy_call (rval=0xa3bcbf18, uap=<optimized out>,
l=0x95b74be0, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#20 sy_invoke (code=5, rval=0xa3bcbf18, uap=<optimized out>,
l=0x95b74be0, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#21 syscall (tf=0xa3bcbfb0, l=0x95b74be0, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#22 0x81012ecc in swi_handler (tf=0xa3bcbfb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$12 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x922263a0},
uo_npages = 0, uo_refs = 7, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x922263c4, 0x8146e51c}},
v_size = 512, v_writesize = 512, v_iflag = 0, v_vflag = 49, v_uflag =
0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92c3f8e8, tqe_prev =
0x91c1b71c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x922aea80,
tqe_prev = 0x920d3010},
v_cleanblkhd = {lh_first = 0x92e3dd80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x92b6d7c0}, v_nclist = {
lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket = 0x0,
vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR,
v_tag = VT_UFS, v_lock = {
rw_owner = 2508597447}, v_data = 0x922240c0, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x93f6ab08,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$13 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x93f6ab10},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x93f6ab34, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x94f02008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x93f6aa58, tqe_prev =
0x94f02010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x92e249f8,
v_klist = {slh_first = 0x0}}
(gdb) frame 8
#8 0x81367a34 in vn_lock (vp=vp@entry=0x93f6ab08, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
1034 /home/riz/src/sys/kern/vfs_vnops.c: No such file or directory.
(gdb) print *vp
$14 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x93f6ab10},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x93f6ab34, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x94f02008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x93f6aa58, tqe_prev =
0x94f02010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x92e249f8,
v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x95c6a940
0x812e9eb8 in mi_switch (l=l@entry=0x95c6a940) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95c6a940) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512c7b0, q=q@entry=1, obj=obj@entry=0x9222643c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x9222643c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x95063f00, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81379db0 in layerfs_root (mp=<optimized out>, vpp=0xa3a03c1c) at
/home/riz/src/sys/miscfs/genfs/layer_vfsops.c:149
#10 0x8135bc10 in VFS_ROOT (mp=mp@entry=0x95061008,
a=a@entry=0xa3a03c1c) at /home/riz/src/sys/kern/vfs_subr.c:1307
#11 0x81355474 in lookup_once (state=state@entry=0xa3a03d28,
searchdir=0x91fe0180, newsearchdir_ret=newsearchdir_ret@entry=0xa3a03cb4,
foundobj_ret=foundobj_ret@entry=0xa3a03cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1094
#12 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#13 namei_tryemulroot (state=state@entry=0xa3a03d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#14 0x813571a8 in namei (ndp=ndp@entry=0xa3a03e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#15 0x813683dc in vn_open (ndp=ndp@entry=0xa3a03e48,
fmode=fmode@entry=522, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#16 0x8135f938 in do_open (l=l@entry=0x95c6a940, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=521, open_mode=open_mode@entry=438,
fd=fd@entry=0xa3a03eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#17 0x8135fa78 in do_sys_openat (l=0x95c6a940, fdat=fdat@entry=-100,
path=<optimized out>, flags=521, mode=438, fd=fd@entry=0xa3a03eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#18 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa3a03f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#19 0x81012cc4 in sy_call (rval=0xa3a03f18, uap=<optimized out>,
l=0x95c6a940, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#20 sy_invoke (code=5, rval=0xa3a03f18, uap=<optimized out>,
l=0x95c6a940, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#21 syscall (tf=0xa3a03fb0, l=0x95c6a940, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#22 0x81012ecc in swi_handler (tf=0xa3a03fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$15 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x922263a0},
uo_npages = 0, uo_refs = 7, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x922263c4, 0x8146e51c}},
v_size = 512, v_writesize = 512, v_iflag = 0, v_vflag = 49, v_uflag =
0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92c3f8e8, tqe_prev =
0x91c1b71c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x922aea80,
tqe_prev = 0x920d3010},
v_cleanblkhd = {lh_first = 0x92e3dd80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x92b6d7c0}, v_nclist = {
lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket = 0x0,
vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR,
v_tag = VT_UFS, v_lock = {
rw_owner = 2508597447}, v_data = 0x922240c0, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$16 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x95063f08},
uo_npages = 0, uo_refs = 4, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x95063f2c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x95061008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x95063e50, tqe_prev =
0x95061010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x915818b0,
v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x95369360
0x812e9eb8 in mi_switch (l=l@entry=0x95369360) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95369360) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512c7b0, q=q@entry=1, obj=obj@entry=0x9222643c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x9222643c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x950c8de0,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x950c8de0, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81379db0 in layerfs_root (mp=<optimized out>, vpp=0x9e569c7c) at
/home/riz/src/sys/miscfs/genfs/layer_vfsops.c:149
#10 0x8135bc10 in VFS_ROOT (mp=mp@entry=0x95066008,
a=a@entry=0x9e569c7c) at /home/riz/src/sys/kern/vfs_subr.c:1307
#11 0x81355474 in lookup_once (state=state@entry=0x9e569d88,
searchdir=0x934918c0, newsearchdir_ret=newsearchdir_ret@entry=0x9e569d14,
foundobj_ret=foundobj_ret@entry=0x9e569d18) at
/home/riz/src/sys/kern/vfs_lookup.c:1094
#12 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#13 namei_tryemulroot (state=state@entry=0x9e569d88,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#14 0x813571a8 in namei (ndp=ndp@entry=0x9e569df0) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#15 0x8135ce74 in fd_nameiat (fdat=fdat@entry=-100,
ndp=ndp@entry=0x9e569df0, l=<optimized out>) at
/home/riz/src/sys/kern/vfs_syscalls.c:179
#16 0x81361004 in do_sys_statat (l=<optimized out>,
fdat=fdat@entry=-100, userpath=0x7fffde68 <error: Cannot access memory
at address 0x7fffde68>, nd_flag=nd_flag@entry=64,
sb=sb@entry=0x9e569e58) at /home/riz/src/sys/kern/vfs_syscalls.c:3042
#17 0x813610c4 in sys___stat50 (l=<optimized out>, uap=0x9e569fb8,
retval=<optimized out>) at /home/riz/src/sys/kern/vfs_syscalls.c:3067
#18 0x81012cc4 in sy_call (rval=0x9e569f18, uap=<optimized out>,
l=0x95369360, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:65
#19 sy_invoke (code=439, rval=0x9e569f18, uap=<optimized out>,
l=0x95369360, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:94
#20 syscall (tf=0x9e569fb0, l=0x95369360, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#21 0x81012ecc in swi_handler (tf=0x9e569fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$17 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x922263a0},
uo_npages = 0, uo_refs = 7, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x922263c4, 0x8146e51c}},
v_size = 512, v_writesize = 512, v_iflag = 0, v_vflag = 49, v_uflag =
0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92c3f8e8, tqe_prev =
0x91c1b71c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x922aea80,
tqe_prev = 0x920d3010},
v_cleanblkhd = {lh_first = 0x92e3dd80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x92b6d7c0}, v_nclist = {
lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket = 0x0,
vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR,
v_tag = VT_UFS, v_lock = {
rw_owner = 2508597447}, v_data = 0x922240c0, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x950c8de0,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$18 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x950c8de8},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x950c8e0c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x95066008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x950c8d30, tqe_prev =
0x95066010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x92e24a28,
v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x95893960
0x812e9eb8 in mi_switch (l=l@entry=0x95893960) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95893960) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>, ts@entry=0x0,
q=q@entry=1, obj=obj@entry=0x9222643c, sobj=sobj@entry=0x8153f5ac
<rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x9222643c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x95063f00, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81379db0 in layerfs_root (mp=<optimized out>, vpp=0xa11ffc1c) at
/home/riz/src/sys/miscfs/genfs/layer_vfsops.c:149
#10 0x8135bc10 in VFS_ROOT (mp=mp@entry=0x95061008,
a=a@entry=0xa11ffc1c) at /home/riz/src/sys/kern/vfs_subr.c:1307
#11 0x81355474 in lookup_once (state=state@entry=0xa11ffd28,
searchdir=0x91fe0180, newsearchdir_ret=newsearchdir_ret@entry=0xa11ffcb4,
foundobj_ret=foundobj_ret@entry=0xa11ffcb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1094
#12 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#13 namei_tryemulroot (state=state@entry=0xa11ffd28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#14 0x813571a8 in namei (ndp=ndp@entry=0xa11ffe48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#15 0x813683dc in vn_open (ndp=ndp@entry=0xa11ffe48,
fmode=fmode@entry=522, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#16 0x8135f938 in do_open (l=l@entry=0x95893960, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=521, open_mode=open_mode@entry=438,
fd=fd@entry=0xa11ffeec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#17 0x8135fa78 in do_sys_openat (l=0x95893960, fdat=fdat@entry=-100,
path=<optimized out>, flags=521, mode=438, fd=fd@entry=0xa11ffeec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#18 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa11fff18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#19 0x81012cc4 in sy_call (rval=0xa11fff18, uap=<optimized out>,
l=0x95893960, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#20 sy_invoke (code=5, rval=0xa11fff18, uap=<optimized out>,
l=0x95893960, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#21 syscall (tf=0xa11fffb0, l=0x95893960, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#22 0x81012ecc in swi_handler (tf=0xa11fffb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92226398, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$19 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x922263a0},
uo_npages = 0, uo_refs = 7, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x922263c4, 0x8146e51c}},
v_size = 512, v_writesize = 512, v_iflag = 0, v_vflag = 49, v_uflag =
0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92c3f8e8, tqe_prev =
0x91c1b71c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x922aea80,
tqe_prev = 0x920d3010},
v_cleanblkhd = {lh_first = 0x92e3dd80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x92b6d7c0}, v_nclist = {
lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket = 0x0,
vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR,
v_tag = VT_UFS, v_lock = {
rw_owner = 2508597447}, v_data = 0x922240c0, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x81370704 in VOP_LOCK (vp=vp@entry=0x95063f00,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$20 = {v_uobj = {vmobjlock = 0x9222b700, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x95063f08},
uo_npages = 0, uo_refs = 4, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x95063f2c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 1, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x95061008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x95063e50, tqe_prev =
0x95061010}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x915818b0,
v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x958628c0
0x812e9eb8 in mi_switch (l=l@entry=0x958628c0) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x958628c0) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512d4d0, q=q@entry=1, obj=obj@entry=0x92314e9c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x92314e9c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92314df8, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x975cd640, flags=flags@entry=2)
at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=0x975cd640, flags=flags@entry=2) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x813553fc in lookup_once (state=state@entry=0xa3e9dd88,
searchdir=0x922ae9d0, newsearchdir_ret=newsearchdir_ret@entry=0xa3e9dd14,
foundobj_ret=foundobj_ret@entry=0xa3e9dd18) at
/home/riz/src/sys/kern/vfs_lookup.c:1065
#10 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#11 namei_tryemulroot (state=state@entry=0xa3e9dd88,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#12 0x813571a8 in namei (ndp=ndp@entry=0xa3e9ddf0) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#13 0x8135ce74 in fd_nameiat (fdat=fdat@entry=-100,
ndp=ndp@entry=0xa3e9ddf0, l=<optimized out>) at
/home/riz/src/sys/kern/vfs_syscalls.c:179
#14 0x81361004 in do_sys_statat (l=<optimized out>,
fdat=fdat@entry=-100, userpath=0x7fffde5e <error: Cannot access memory
at address 0x7fffde5e>, nd_flag=nd_flag@entry=64,
sb=sb@entry=0xa3e9de58) at /home/riz/src/sys/kern/vfs_syscalls.c:3042
#15 0x813610c4 in sys___stat50 (l=<optimized out>, uap=0xa3e9dfb8,
retval=<optimized out>) at /home/riz/src/sys/kern/vfs_syscalls.c:3067
#16 0x81012cc4 in sy_call (rval=0xa3e9df18, uap=<optimized out>,
l=0x958628c0, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:65
#17 sy_invoke (code=439, rval=0xa3e9df18, uap=<optimized out>,
l=0x958628c0, sy=0x8153e56c <sysent+8780>) at
/home/riz/src/sys/sys/syscallvar.h:94
#18 syscall (tf=0xa3e9dfb0, l=0x958628c0, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#19 0x81012ecc in swi_handler (tf=0xa3e9dfb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92314df8, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$21 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x975cd640, flags=flags@entry=2)
at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$22 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x975cd648},
uo_npages = 0, uo_refs = 1, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x975cd66c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 16, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x92e35008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},
v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x0, tqe_prev =
0x94c00980}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0,
tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist = {lh_first =
0x0}, v_un = {vu_mountedhere = 0x0,
vu_socket = 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx =
0x0}, v_type = VDIR, v_tag = VT_NULL, v_lock = {rw_owner = 0}, v_data =
0x9665fb58, v_klist = {
slh_first = 0x0}}
(gdb) kvm proc 0x95768060
0x812e9eb8 in mi_switch (l=l@entry=0x95768060) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95768060) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>, ts@entry=0x0,
q=q@entry=1, obj=obj@entry=0x94c17784, sobj=sobj@entry=0x8153f5ac
<rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x94c17784,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x94c176e0, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x948159a0,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x948159a0, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81355670 in lookup_once (state=state@entry=0xa3bc5d28,
searchdir=0x948159a0, newsearchdir_ret=newsearchdir_ret@entry=0xa3bc5cb4,
foundobj_ret=foundobj_ret@entry=0xa3bc5cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1067
#10 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#11 namei_tryemulroot (state=state@entry=0xa3bc5d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#12 0x813571a8 in namei (ndp=ndp@entry=0xa3bc5e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#13 0x813683dc in vn_open (ndp=ndp@entry=0xa3bc5e48,
fmode=fmode@entry=1, cmode=cmode@entry=1324) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#14 0x8135f938 in do_open (l=l@entry=0x95768060, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=0, open_mode=open_mode@entry=5420,
fd=fd@entry=0xa3bc5eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#15 0x8135fa78 in do_sys_openat (l=0x95768060, fdat=fdat@entry=-100,
path=<optimized out>, flags=0, mode=5420, fd=fd@entry=0xa3bc5eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#16 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa3bc5f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#17 0x81012cc4 in sy_call (rval=0xa3bc5f18, uap=<optimized out>,
l=0x95768060, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#18 sy_invoke (code=5, rval=0xa3bc5f18, uap=<optimized out>,
l=0x95768060, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#19 syscall (tf=0xa3bc5fb0, l=0x95768060, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#20 0x81012ecc in swi_handler (tf=0xa3bc5fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x94c176e0, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$23 = {v_uobj = {vmobjlock = 0x946fde00, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x94c176e8},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x94c1770c, 0x8146e51c}},
v_size = 55808, v_writesize = 55808, v_iflag = 0, v_vflag = 48,
v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 7,
v_synclist_slot = 0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x92314df8, tqe_prev =
0x9312eedc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x9202c010,
tqe_prev = 0x936a64a8},
v_cleanblkhd = {lh_first = 0x94f54d80}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x926d78c0}, v_nclist = {
lh_first = 0x9252ba80}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2508602375}, v_data = 0x924cbc40, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x948159a0,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$24 = {v_uobj = {vmobjlock = 0x946fde00, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x948159a8},
uo_npages = 0, uo_refs = 1, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x948159cc, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 16, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x94c60008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev =
0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0, v_mntvnodes =
{tqe_next = 0x9246d850, tqe_prev = 0x93ea6ce8},
v_cleanblkhd = {lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0},
v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first =
0x0}, v_nclist = {lh_first = 0x0},
v_un = {vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0},
v_data = 0x952b8278, v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x91c37120
0x812e9eb8 in mi_switch (l=l@entry=0x91c37120) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x91c37120) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>,
ts@entry=0x9512d4d0, q=q@entry=1, obj=obj@entry=0x92314e9c,
sobj=sobj@entry=0x8153f5ac <rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x92314e9c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=vp@entry=0x92314df8, flags=flags@entry=2)
at /home/riz/src/sys/kern/vnode_if.c:1166
#6 0x81367a34 in vn_lock (vp=vp@entry=0x92314df8, flags=flags@entry=2)
at /home/riz/src/sys/kern/vfs_vnops.c:1034
#7 0x8127b0f8 in ffs_sync (mp=0x920d3008, waitfor=3, cred=0x91591ec0)
at /home/riz/src/sys/ufs/ffs/ffs_vfsops.c:1882
#8 0x8135bd48 in VFS_SYNC (mp=mp@entry=0x920d3008, a=a@entry=3,
b=<optimized out>) at /home/riz/src/sys/kern/vfs_subr.c:1355
#9 0x8135c004 in sched_sync (arg=<unavailable>) at
/home/riz/src/sys/kern/vfs_subr.c:783
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=vp@entry=0x92314df8, flags=flags@entry=2)
at /home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$25 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
(gdb) frame 6
#6 0x81367a34 in vn_lock (vp=vp@entry=0x92314df8, flags=flags@entry=2)
at /home/riz/src/sys/kern/vfs_vnops.c:1034
1034 /home/riz/src/sys/kern/vfs_vnops.c: No such file or directory.
(gdb) print *vp
$26 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
(gdb) kvm proc 0x91596840
0x812e9eb8 in mi_switch (l=l@entry=0x91596840) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x91596840) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812f4dd0 in turnstile_block (ts=<optimized out>, ts@entry=0x0,
q=q@entry=1, obj=obj@entry=0x92314e9c, sobj=sobj@entry=0x8153f5ac
<rw_syncobj>)
at /home/riz/src/sys/kern/kern_turnstile.c:430
#3 0x812e1834 in rw_vector_enter (rw=rw@entry=0x92314e9c,
op=op@entry=RW_WRITER) at /home/riz/src/sys/kern/kern_rwlock.c:387
#4 0x813795f8 in genfs_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/genfs_vnops.c:384
#5 0x813706dc in VOP_LOCK (vp=0x92314df8, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
#6 0x8137a990 in layer_lock (v=<optimized out>) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:733
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x9436ef20,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
#8 0x81367a34 in vn_lock (vp=vp@entry=0x9436ef20, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
#9 0x81363b34 in vclean (vp=vp@entry=0x9436ef20) at
/home/riz/src/sys/kern/vfs_vnode.c:891
#10 0x81365028 in cleanvnode () at /home/riz/src/sys/kern/vfs_vnode.c:366
#11 0x81365224 in vdrain_thread (cookie=<unavailable>) at
/home/riz/src/sys/kern/vfs_vnode.c:386
(gdb) frame 5
#5 0x813706dc in VOP_LOCK (vp=0x92314df8, flags=<optimized out>) at
/home/riz/src/sys/kern/vnode_if.c:1166
1166 /home/riz/src/sys/kern/vnode_if.c: No such file or directory.
(gdb) print *vp
$27 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
(gdb) frame 7
#7 0x813706dc in VOP_LOCK (vp=vp@entry=0x9436ef20,
flags=flags@entry=131074) at /home/riz/src/sys/kern/vnode_if.c:1166
1166 in /home/riz/src/sys/kern/vnode_if.c
(gdb) print *vp
$28 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9436ef28},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x95863c00, 0x95863cb8,
0x8146e51c}}, v_size = 0, v_writesize = 0, v_iflag = 1048576,
v_vflag = 16, v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt
= 0, v_synclist_slot = 0,
v_mount = 0x94efd008, v_op = 0x9159ac48, v_freelist = {tqe_next =
0x0, tqe_prev = 0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0,
v_mntvnodes = {tqe_next = 0x0,
tqe_prev = 0x92b81258}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0, tqe_prev
= 0x0}, v_dnclist = {lh_first = 0x0},
v_nclist = {lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket
= 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_NULL, v_lock = {
rw_owner = 0}, v_data = 0x9659f760, v_klist = {slh_first = 0x0}}
(gdb) frame 8
#8 0x81367a34 in vn_lock (vp=vp@entry=0x9436ef20, flags=131074) at
/home/riz/src/sys/kern/vfs_vnops.c:1034
1034 /home/riz/src/sys/kern/vfs_vnops.c: No such file or directory.
(gdb) print *vp
$29 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9436ef28},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x95863c00, 0x95863cb8,
0x8146e51c}}, v_size = 0, v_writesize = 0, v_iflag = 1048576,
v_vflag = 16, v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt
= 0, v_synclist_slot = 0,
v_mount = 0x94efd008, v_op = 0x9159ac48, v_freelist = {tqe_next =
0x0, tqe_prev = 0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0,
v_mntvnodes = {tqe_next = 0x0,
tqe_prev = 0x92b81258}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0, tqe_prev
= 0x0}, v_dnclist = {lh_first = 0x0},
v_nclist = {lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket
= 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_NULL, v_lock = {
rw_owner = 0}, v_data = 0x9659f760, v_klist = {slh_first = 0x0}}
(gdb) kvm proc 0x0000000095863c00
0x812e9eb8 in mi_switch (l=l@entry=0x95863c00) at
/home/riz/src/sys/kern/kern_synch.c:719
719 /home/riz/src/sys/kern/kern_synch.c: No such file or directory.
(gdb) bt
#0 0x812e9eb8 in mi_switch (l=l@entry=0x95863c00) at
/home/riz/src/sys/kern/kern_synch.c:719
#1 0x812e6b9c in sleepq_block (timo=timo@entry=0,
catch_p=catch_p@entry=false) at /home/riz/src/sys/kern/kern_sleepq.c:264
#2 0x812b80c0 in cv_wait (cv=cv@entry=0x9436ef4c, mtx=0x91f3f640) at
/home/riz/src/sys/kern/kern_condvar.c:217
#3 0x81363fc8 in vwait (vp=0x9436ef20, flags=flags@entry=1048576) at
/home/riz/src/sys/kern/vfs_vnode.c:1469
#4 0x813654a8 in vget (vp=vp@entry=0x9436ef20, flags=flags@entry=0,
waitok=waitok@entry=true) at /home/riz/src/sys/kern/vfs_vnode.c:463
#5 0x81365f74 in vcache_get (mp=0x94efd008, key=key@entry=0xa1a399f4,
key_len=key_len@entry=4, vpp=vpp@entry=0xa1a399fc) at
/home/riz/src/sys/kern/vfs_vnode.c:1148
#6 0x81379c74 in layer_node_create (mp=<optimized out>,
lowervp=lowervp@entry=0x92314df8, nvpp=0xa1a39ac4) at
/home/riz/src/sys/miscfs/genfs/layer_subr.c:120
#7 0x8137a478 in layer_lookup (v=0xa1a39a50) at
/home/riz/src/sys/miscfs/genfs/layer_vnops.c:385
#8 0x8136f380 in VOP_LOOKUP (dvp=dvp@entry=0x92b811e0,
vpp=vpp@entry=0xa1a39ac4, cnp=cnp@entry=0xa1a39ad8) at
/home/riz/src/sys/kern/vnode_if.c:119
#9 0x813531f4 in getcwd_scandir (l=0x95863c00, bufp=0x0,
bpp=0xa1a39ac8, uvpp=0xa1a39ac4, lvpp=<synthetic pointer>) at
/home/riz/src/sys/kern/vfs_getcwd.c:136
#10 getcwd_common (lvp=lvp@entry=0x92b811e0, rvp=<optimized out>,
bpp=bpp@entry=0x0, bufp=bufp@entry=0x0, limit=limit@entry=512,
flags=flags@entry=0, l=l@entry=0x95863c00)
at /home/riz/src/sys/kern/vfs_getcwd.c:415
#11 0x8135358c in vn_isunder (lvp=lvp@entry=0x92b811e0, rvp=<optimized
out>, l=l@entry=0x95863c00) at /home/riz/src/sys/kern/vfs_getcwd.c:456
#12 0x813552d4 in lookup_once (state=state@entry=0xa1a39d28,
searchdir=0x92b811e0, newsearchdir_ret=newsearchdir_ret@entry=0xa1a39cb4,
foundobj_ret=foundobj_ret@entry=0xa1a39cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:947
#13 0x813560a8 in namei_oneroot (isnfsd=<optimized out>,
inhibitmagic=<optimized out>, neverfollow=<optimized out>,
state=<optimized out>)
at /home/riz/src/sys/kern/vfs_lookup.c:1215
#14 namei_tryemulroot (state=state@entry=0xa1a39d28,
neverfollow=neverfollow@entry=0, inhibitmagic=inhibitmagic@entry=0,
isnfsd=isnfsd@entry=0)
at /home/riz/src/sys/kern/vfs_lookup.c:1469
#15 0x813571a8 in namei (ndp=ndp@entry=0xa1a39e48) at
/home/riz/src/sys/kern/vfs_lookup.c:1505
#16 0x813683dc in vn_open (ndp=ndp@entry=0xa1a39e48,
fmode=fmode@entry=1, cmode=cmode@entry=420) at
/home/riz/src/sys/kern/vfs_vnops.c:175
#17 0x8135f938 in do_open (l=l@entry=0x95863c00, dvp=0x0, pb=<optimized
out>, open_flags=open_flags@entry=0, open_mode=open_mode@entry=438,
fd=fd@entry=0xa1a39eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1578
#18 0x8135fa78 in do_sys_openat (l=0x95863c00, fdat=fdat@entry=-100,
path=<optimized out>, flags=0, mode=438, fd=fd@entry=0xa1a39eec)
at /home/riz/src/sys/kern/vfs_syscalls.c:1658
#19 0x8135fb60 in sys_open (l=<optimized out>, uap=<optimized out>,
retval=0xa1a39f18) at /home/riz/src/sys/kern/vfs_syscalls.c:1678
#20 0x81012cc4 in sy_call (rval=0xa1a39f18, uap=<optimized out>,
l=0x95863c00, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:65
#21 sy_invoke (code=5, rval=0xa1a39f18, uap=<optimized out>,
l=0x95863c00, sy=0x8153c384 <sysent+100>) at
/home/riz/src/sys/sys/syscallvar.h:94
#22 syscall (tf=0xa1a39fb0, l=0x95863c00, insn=<optimized out>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:253
#23 0x81012ecc in swi_handler (tf=0xa1a39fb0, tf@entry=<error reading
variable: Register 25 is not available>) at
/home/riz/src/sys/arch/arm/arm/syscall.c:188
(gdb) frame 4
#4 0x813654a8 in vget (vp=vp@entry=0x9436ef20, flags=flags@entry=0,
waitok=waitok@entry=true) at /home/riz/src/sys/kern/vfs_vnode.c:463
463 /home/riz/src/sys/kern/vfs_vnode.c: No such file or directory.
(gdb) print *vp
$30 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9436ef28},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x95863c00, 0x95863cb8,
0x8146e51c}}, v_size = 0, v_writesize = 0, v_iflag = 1048576,
v_vflag = 16, v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt
= 0, v_synclist_slot = 0,
v_mount = 0x94efd008, v_op = 0x9159ac48, v_freelist = {tqe_next =
0x0, tqe_prev = 0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0,
v_mntvnodes = {tqe_next = 0x0,
tqe_prev = 0x92b81258}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0, tqe_prev
= 0x0}, v_dnclist = {lh_first = 0x0},
v_nclist = {lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket
= 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_NULL, v_lock = {
rw_owner = 0}, v_data = 0x9659f760, v_klist = {slh_first = 0x0}}
(gdb) frame 3
#3 0x81363fc8 in vwait (vp=0x9436ef20, flags=flags@entry=1048576) at
/home/riz/src/sys/kern/vfs_vnode.c:1469
1469 in /home/riz/src/sys/kern/vfs_vnode.c
(gdb) print *vp
$31 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9436ef28},
uo_npages = 0, uo_refs = 2, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x95863c00, 0x95863cb8,
0x8146e51c}}, v_size = 0, v_writesize = 0, v_iflag = 1048576,
v_vflag = 16, v_uflag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt
= 0, v_synclist_slot = 0,
v_mount = 0x94efd008, v_op = 0x9159ac48, v_freelist = {tqe_next =
0x0, tqe_prev = 0x8160afc0 <vnode_free_list>}, v_freelisthd = 0x0,
v_mntvnodes = {tqe_next = 0x0,
tqe_prev = 0x92b81258}, v_cleanblkhd = {lh_first = 0x0},
v_dirtyblkhd = {lh_first = 0x0}, v_synclist = {tqe_next = 0x0, tqe_prev
= 0x0}, v_dnclist = {lh_first = 0x0},
v_nclist = {lh_first = 0x0}, v_un = {vu_mountedhere = 0x0, vu_socket
= 0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_NULL, v_lock = {
rw_owner = 0}, v_data = 0x9659f760, v_klist = {slh_first = 0x0}}
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@tastylime.net>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 14:08:53 +0100
Jeff,
may I ask for another vnode:
kvm proc 0x95768060
fr 9
print foundobj
print *foundobj
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org, netbsd-bugs@netbsd.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 07:02:25 -0700
Gladly:
(gdb) kvm proc 0x95768060
0x812e9eb8 in mi_switch (l=l@entry=0x95768060) at
/home/riz/src/sys/kern/kern_synch.c:719
719 in /home/riz/src/sys/kern/kern_synch.c
(gdb) fr 9
#9 0x81355670 in lookup_once (state=state@entry=0xa3bc5d28,
searchdir=0x948159a0, newsearchdir_ret=newsearchdir_ret@entry=0xa3bc5cb4,
foundobj_ret=foundobj_ret@entry=0xa3bc5cb8) at
/home/riz/src/sys/kern/vfs_lookup.c:1067
1067 /home/riz/src/sys/kern/vfs_lookup.c: No such file or directory.
(gdb) print foundobj
$1 = (struct vnode *) 0x9246d850
(gdb) print *foundobj
$2 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x9246d858},
uo_npages = 0, uo_refs = 1, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x9246d87c, 0x8146e51c}},
v_size = 0, v_writesize = 0, v_iflag = 0, v_vflag = 16, v_uflag = 0,
v_numoutput = 0, v_writecount = 0, v_holdcnt = 0, v_synclist_slot = 0,
v_mount = 0x94c60008,
v_op = 0x9159ac48, v_freelist = {tqe_next = 0x0, tqe_prev =
0x9436ef8c}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x0, tqe_prev
= 0x94815a18}, v_cleanblkhd = {
lh_first = 0x0}, v_dirtyblkhd = {lh_first = 0x0}, v_synclist =
{tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist = {lh_first = 0x0}, v_nclist
= {lh_first = 0x0}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specnode = 0x0,
vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type = VDIR, v_tag = VT_NULL,
v_lock = {rw_owner = 0}, v_data = 0x966580b0,
v_klist = {slh_first = 0x0}}
(gdb)
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@tastylime.net>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 15:14:36 +0100
We now have the layer node, next is:
print *((struct layer_node *)0x966580b0)
print *((struct layer_node *)0x966580b0)->layer_lowervp
to get the lower node.
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 07:19:31 -0700
(gdb) print *((struct layer_node *)0x966580b0)
$3 = {layer_lowervp = 0x92314df8, layer_vnode = 0x9246d850, layer_flags = 0}
(gdb) print *((struct layer_node *)0x966580b0)->layer_lowervp
$4 = {v_uobj = {vmobjlock = 0x91f3f640, pgops = 0x81423cc0
<uvm_vnodeops>, memq = {tqh_first = 0x0, tqh_last = 0x92314e00},
uo_npages = 0, uo_refs = 5, rb_tree = {
rbt_root = 0x0, rbt_ops = 0x81423c00 <uvm_page_tree_ops>,
rbt_minmax = {0x0, 0x0}}, uo_ubc = {lh_first = 0x0}}, v_cv = {cv_opaque
= {0x0, 0x92314e24, 0x8146e51c}},
v_size = 2048, v_writesize = 2048, v_iflag = 0, v_vflag = 48, v_uflag
= 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 1, v_synclist_slot =
0, v_mount = 0x920d3008,
v_op = 0x9159a548, v_freelist = {tqe_next = 0x93142430, tqe_prev =
0x940c2fbc}, v_freelisthd = 0x0, v_mntvnodes = {tqe_next = 0x966c42f0,
tqe_prev = 0x92f3c428},
v_cleanblkhd = {lh_first = 0x94d56328}, v_dirtyblkhd = {lh_first =
0x0}, v_synclist = {tqe_next = 0x0, tqe_prev = 0x0}, v_dnclist =
{lh_first = 0x93489880}, v_nclist = {
lh_first = 0x93496380}, v_un = {vu_mountedhere = 0x0, vu_socket =
0x0, vu_specnode = 0x0, vu_fifoinfo = 0x0, vu_ractx = 0x0}, v_type =
VDIR, v_tag = VT_UFS, v_lock = {
rw_owner = 2507571303}, v_data = 0x92f40198, v_klist = {slh_first =
0x0}}
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@tastylime.net>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 15:40:24 +0100
First analysis is:
Thread 0x91596840 (0.9 vdrain) tries to clean vnode 0x9436ef20.
Vnode 0x9436ef20 is VT_NULL, VDIR with lower vnode 0x92314df8.
Lower vnode is VT_UFS, VDIR currently held by thread 0x95768060 (25124.1 =
make).
Thread 0x95768060 (25124.1 make) holds vnode 0x9246d850.
Vnode 0x9246d850 is VT_NULL, VDIR with lower vnode 0x92314df8.
Lower vnode is VT_UFS, VDIR.
Thread 0x95768060 (25124.1 make) tries to lock vnode 0x948159a0.
Vnode 0x948159a0 is VT_NULL, VDIR with lower vnode 0x94c176e0.
Lower vnode is VT_UFS, VDIR currently held by thread 0x95863c00.
Thread 0x95863c00 tries to vget 0x9436ef20.
Deadlock.
Thread 0x95768060 (25124.1 make) tries to lock here:
	if (searchdir != foundobj) {
		if (cnp->cn_flags & ISDOTDOT)
			VOP_UNLOCK(searchdir);
		error = vn_lock(foundobj, LK_EXCLUSIVE);
		if (cnp->cn_flags & ISDOTDOT)
===>			vn_lock(searchdir, LK_EXCLUSIVE | LK_RETRY);
		if (error != 0) {
			vrele(foundobj);
			goto done;
		}
	}
Thread 0x95863c00 calls VOP_LOOKUP() with locked vnode 0x92b811e0 here:
	cn.cn_nameiop = LOOKUP;
	cn.cn_flags = ISLASTCN | ISDOTDOT | RDONLY;
	cn.cn_cred = cred;
	cn.cn_nameptr = "..";
	cn.cn_namelen = 2;
	cn.cn_consume = 0;

	/* At this point, lvp is locked */
===>	error = VOP_LOOKUP(lvp, uvpp, &cn);
	vput(lvp);
So we have two layerfs vnodes with the same lower vnode:
1) (upper 0x9436ef20 lower 0x92314df8)
2) (upper 0x9246d850 lower 0x92314df8).
The first node gets cleaned from vdrain_thread -> cleanvnode -> vclean and
here vclean wants to lock it.
The second node is the "foundobj" from thread 0x95768060 (25124.1 make),
currently referenced and locked.
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
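The chain above is a closed wait-for cycle. As an illustration only (plain Python, not kernel code; the node names are just the thread and vnode addresses quoted in the analysis), the cycle can be checked mechanically by following each thread's wait-for edge until a node repeats:

```python
# Wait-for graph taken from the analysis above: each thread waits for a
# lock whose current holder is the next thread. This is a model of the
# reported state, not NetBSD code.
wait_for = {
    "vdrain 0x91596840": "make 0x95768060",    # wants lower vnode 0x92314df8
    "make 0x95768060": "thread 0x95863c00",    # wants lower vnode 0x94c176e0
    "thread 0x95863c00": "vdrain 0x91596840",  # vget() on vnode 0x9436ef20 being cleaned
}

def find_cycle(graph, start):
    """Follow wait-for edges from `start`; return the cycle if a node repeats."""
    seen, cur = [], start
    while cur not in seen:
        seen.append(cur)
        cur = graph.get(cur)
        if cur is None:
            return None  # chain ends: no deadlock from this start
    return seen[seen.index(cur):]

cycle = find_cycle(wait_for, "vdrain 0x91596840")
```

All three threads appear in the returned cycle, which is the deadlock stated above.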
Responsible-Changed-From-To: kern-bug-people->hannken
Responsible-Changed-By: hannken@NetBSD.org
Responsible-Changed-When: Thu, 29 Oct 2015 14:46:41 +0000
Responsible-Changed-Why:
Take.
State-Changed-From-To: open->analyzed
State-Changed-By: hannken@NetBSD.org
State-Changed-When: Thu, 29 Oct 2015 14:46:41 +0000
State-Changed-Why:
Problem understood.
From: Konrad Schroder <perseant@hhhh.org>
To: gnats-bugs@NetBSD.org, kern-bug-people@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, riz@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 17:19:08 -0700
Forgive me for a possibly impertinent question, but what is the output
of "mount" on this system?
A quick read of hannken's analysis makes me think that the basic problem
here is that the two imposed locking orders (parent directory before
subdirectory, and upper before lower) are in direct conflict when a
directory is null-mounted onto a subdirectory of itself. In this case,
based on which vnodes are "..", it seems to me that we have something like
/dev/foo on /e0 type ffs
/e0/f8 on /e0/f8/20 type null
/e0/f8 on /somewhere/50 type null
where f8 has vnode 0x92314df8, 20 has vnode 0x9436ef20 and 50 has vnode
0x9246d850; and e0 has vnode 0x92b811e0. There might be more directory
layers between e0 and f8, and between f8 and 20.
If that does match the structure of the mount points, there could be a
very similar deadlock involving only one null mount: someone holds
/e0/f8 and tries to lock /e0/f8/20 as a subdirectory; someone else holds
/e0/f8/20, and tries to lock /e0/f8/20/f8 as a subdirectory---but that
is over /e0/f8, deadlock.
Thanks,
Konrad Schroder
perseant@hhhh.org
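The order conflict Konrad describes can be made concrete with a small sketch (illustration only, not NetBSD code; "f8" and "20" are the hypothetical vnodes from his mail). Each locking rule is a set of ordered pairs, and the two rules demand opposite orders for the same pair of locks:

```python
# Illustration of the conflicting lock orders: "f8" is a lower directory,
# "20" is the null mount of f8 onto its own subdirectory /e0/f8/20.
# Each rule is a set of (first, second) pairs: "lock first before second".
parent_before_child = {("f8", "20")}  # /e0/f8 is the parent of /e0/f8/20
upper_before_lower = {("20", "f8")}   # layer vnode 20 stacks above lower vnode f8

def conflicting(*rule_sets):
    """True if some pair of locks is required in both orders."""
    pairs = set().union(*rule_sets)
    return any((b, a) in pairs for (a, b) in pairs)

# Combined, the rules demand both f8-before-20 and 20-before-f8:
assert conflicting(parent_before_child, upper_before_lower)
```

Two threads obeying one rule each can therefore acquire the pair in opposite orders, which is the classic ABBA deadlock shape.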
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Thu, 29 Oct 2015 19:52:39 -0700
On 10/29/15 5:20 PM, Konrad Schroder wrote:
> Forgive me for a possibly impertinent question, but what is the output
> of "mount" on this system?
>
>
jetson1:riz ~> mount
/dev/dk10 on / type ffs (local)
/dev/ld1e on /boot type msdos (local)
kernfs on /kern type kernfs (local)
ptyfs on /dev/pts type ptyfs (local)
procfs on /proc type procfs (local)
tmpfs on /var/shm type tmpfs (local)
/dev/dk14 on /bulk-data type ffs (log, local)
/dev/dk13 on /packages type ffs (local)
vidnas:/mnt/tank/netbsd/pkg/distfiles on /distfiles type nfs
/dev/dk12 on /bulk-scratch type ffs (asynchronous, noatime, local)
/dev/pts on /bulk-scratch/r1/dev/pts type null (local)
/bulk-data on /bulk-scratch/r1/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r1/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r1/bulk-data/distfiles type null
procfs on /bulk-scratch/r1/proc type procfs (local)
/dev/pts on /bulk-scratch/r2/dev/pts type null (local)
/bulk-data on /bulk-scratch/r2/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r2/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r2/bulk-data/distfiles type null
procfs on /bulk-scratch/r2/proc type procfs (local)
/dev/pts on /bulk-scratch/r3/dev/pts type null (local)
/bulk-data on /bulk-scratch/r3/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r3/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r3/bulk-data/distfiles type null
procfs on /bulk-scratch/r3/proc type procfs (local)
/dev/pts on /bulk-scratch/r4/dev/pts type null (local)
/bulk-data on /bulk-scratch/r4/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r4/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r4/bulk-data/distfiles type null
procfs on /bulk-scratch/r4/proc type procfs (local)
/dev/pts on /bulk-scratch/r5/dev/pts type null (local)
/bulk-data on /bulk-scratch/r5/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r5/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r5/bulk-data/distfiles type null
procfs on /bulk-scratch/r5/proc type procfs (local)
/dev/pts on /bulk-scratch/r6/dev/pts type null (local)
/bulk-data on /bulk-scratch/r6/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/r6/bulk-data/packages type null (local)
/distfiles on /bulk-scratch/r6/bulk-data/distfiles type null
procfs on /bulk-scratch/r6/proc type procfs (local)
/dev/pts on /bulk-scratch/master/dev/pts type null (local)
/bulk-data on /bulk-scratch/master/bulk-data type null (local)
/packages/earmv7hf on /bulk-scratch/master/bulk-data/packages type null
(local)
/distfiles on /bulk-scratch/master/bulk-data/distfiles type null
procfs on /bulk-scratch/master/proc type procfs (local)
jetson1:riz ~>
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Konrad Schroder <perseant@hhhh.org>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Fri, 30 Oct 2015 09:55:12 +0100
> A quick read of hannken's analysis makes me think that the basic problem
> here is that the two imposed locking orders (parent directory before
> subdirectory, and upper before lower) are in direct conflict when a
> directory is null-mounted onto a subdirectory of itself.
Sure, loops would deadlock very fast.
For the deadlock described here it is sufficient to null-mount the same
path at two different mount points, like:
dev on /data type ffs
/data on /nulla type null
/data on /nullb type null
The deadlock arises from one thread cleaning a node from /nulla while
another thread tries to vget a node on /nullb and both nodes point to
the same node on /data.
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@NetBSD.org>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Fri, 30 Oct 2015 17:29:10 +0100
--Apple-Mail=_8B134FE5-D1A3-411B-9E18-F9AA4C1811B0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
charset=us-ascii
Please try the attached patch. It will take the vnode lock before
the vnode is marked VI_CHANGING and fed to vclean().
vclean() should no longer block.
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
--Apple-Mail=_8B134FE5-D1A3-411B-9E18-F9AA4C1811B0
Content-Disposition: attachment;
filename=vfs_vnode.diff
Content-Type: application/octet-stream;
name="vfs_vnode.diff"
Content-Transfer-Encoding: 7bit
Index: vfs_vnode.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_vnode.c,v
retrieving revision 1.45
diff -p -u -2 -r1.45 vfs_vnode.c
--- vfs_vnode.c 12 Jul 2015 08:11:28 -0000 1.45
+++ vfs_vnode.c 30 Oct 2015 16:22:24 -0000
@@ -326,13 +326,15 @@ try_nextlist:
KASSERT(vp->v_freelisthd == listhd);
- if (!mutex_tryenter(vp->v_interlock))
+ if (vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT) != 0)
continue;
- if ((vp->v_iflag & VI_XLOCK) != 0) {
- mutex_exit(vp->v_interlock);
+ if (!mutex_tryenter(vp->v_interlock)) {
+ VOP_UNLOCK(vp);
continue;
}
+ KASSERT((vp->v_iflag & VI_XLOCK) == 0);
mp = vp->v_mount;
if (fstrans_start_nowait(mp, FSTRANS_SHARED) != 0) {
mutex_exit(vp->v_interlock);
+ VOP_UNLOCK(vp);
continue;
}
@@ -644,4 +646,8 @@ vrelel(vnode_t *vp, int flags)
*/
VOP_INACTIVE(vp, &recycle);
+ if (recycle) {
+ /* vclean() below will drop the lock. */
+ vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
+ }
mutex_enter(vp->v_interlock);
if (!recycle) {
@@ -868,4 +874,5 @@ holdrelel(vnode_t *vp)
* Disassociate the underlying file system from a vnode.
*
+ * Must be called with vnode locked and will return unlocked.
* Must be called with the interlock held, and will return with it held.
*/
@@ -877,4 +884,6 @@ vclean(vnode_t *vp)
int error;
+ KASSERT((vp->v_vflag & VV_LOCKSWORK) == 0 ||
+ VOP_ISLOCKED(vp) == LK_EXCLUSIVE);
KASSERT(mutex_owned(vp->v_interlock));
KASSERT((vp->v_iflag & VI_MARKER) == 0);
@@ -883,17 +892,9 @@ vclean(vnode_t *vp)
/* If already clean, nothing to do. */
if ((vp->v_iflag & VI_CLEAN) != 0) {
+ VOP_UNLOCK(vp);
return;
}
active = (vp->v_usecount > 1);
- mutex_exit(vp->v_interlock);
-
- vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
-
- /*
- * Prevent the vnode from being recycled or brought into use
- * while we clean it out.
- */
- mutex_enter(vp->v_interlock);
KASSERT((vp->v_iflag & (VI_XLOCK | VI_CLEAN)) == 0);
vp->v_iflag |= VI_XLOCK;
@@ -973,4 +974,7 @@ vrecycle(vnode_t *vp)
{
+ if (vn_lock(vp, LK_EXCLUSIVE) != 0)
+ return false;
+
mutex_enter(vp->v_interlock);
@@ -979,4 +983,5 @@ vrecycle(vnode_t *vp)
if (vp->v_usecount != 1) {
mutex_exit(vp->v_interlock);
+ VOP_UNLOCK(vp);
return false;
}
@@ -985,9 +990,8 @@ vrecycle(vnode_t *vp)
if (vp->v_usecount != 1) {
mutex_exit(vp->v_interlock);
+ VOP_UNLOCK(vp);
return false;
- } else if ((vp->v_iflag & VI_CLEAN) != 0) {
- mutex_exit(vp->v_interlock);
- return true;
}
+ KASSERT((vp->v_iflag & VI_CLEAN) == 0);
vp->v_iflag |= VI_CHANGING;
vclean(vp);
@@ -1037,4 +1041,9 @@ vgone(vnode_t *vp)
{
+ if (vn_lock(vp, LK_EXCLUSIVE) != 0) {
+ KASSERT((vp->v_iflag & VI_CLEAN) != 0);
+ vrele(vp);
+ }
+
mutex_enter(vp->v_interlock);
if ((vp->v_iflag & VI_CHANGING) != 0)
--Apple-Mail=_8B134FE5-D1A3-411B-9E18-F9AA4C1811B0--
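The core of the change can be paraphrased outside the kernel as follows (a sketch in plain Python with a threading lock; `try_clean` and its arguments are invented names, and the real code uses vn_lock()/VOP_UNLOCK() plus the vnode interlock):

```python
import threading

def try_clean(vp_lock, clean):
    """Sketch of the patched cleaning order: take the vnode lock
    non-blocking *before* committing to clean the vnode. A cleaner that
    cannot get the lock skips this vnode and moves on, instead of
    sleeping while other threads wait on state it already marked."""
    if not vp_lock.acquire(blocking=False):  # vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT)
        return False                         # skip; pick another vnode to clean
    try:
        clean()                              # vclean() runs with the lock held
    finally:
        vp_lock.release()
    return True
```

In the old order the cleaner first marked the vnode and only then blocked in vn_lock(), which is the edge that closed the cycle in the analysis above; taking the lock first removes that edge.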
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Sat, 31 Oct 2015 09:43:04 -0700
I have been running my pbulk build for about 18 hours, with no deadlock
so far. Not the longest I've ever gone, but certainly the longest I've
gone for a while.
I'll report back in a few days, but this looks really good so far. Thanks!
From: Jeff Rizzo <riz@NetBSD.org>
To: gnats-bugs@NetBSD.org, hannken@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Mon, 9 Nov 2015 12:13:47 -0800
The primary box I'm working on stayed alive for just over a week with this
patch before crashing in an unrelated way. However, a different arm box
running a kernel with the patch from this PR (also pbulk building), just
went into livelock. Unfortunately, netbsd.gdb appears to be the wrong
kernel, so I'm not able to get anything from gdb in this case. :(
Here's what I did gather (bt/a from all LWPs, 'show lock' on the
wchans). It's not clear to me whether this is also in layerfs, though
it would make sense.
armbulk1# ps -axl -oladdr |grep tstile
12 4849 4947 0 117 0 9808 2696 tstile D ? 0:00.45 qmgr -l 926b63c0
1001 17766 17302 35156 109 0 31544 18688 tstile D ? 0:11.60 /usr/li 926b6100
1001 19955 14589 35156 109 0 23192 10640 tstile D ? 0:03.51 /usr/li 91bb9980
1001 25222 23625 35156 109 0 27416 15400 tstile D ? 0:10.25 /usr/li 91b72ba0
1001 26028 10655 35156 109 0 26392 14304 tstile D ? 0:07.68 /usr/li 94f54c60
Stopped in pid 0.23 (system) at netbsd:cpu_Debugger+0x4: bx r14
db{2}> bt/a 926b63c0
trace: pid 4849 lid 1 at 0x9dcabb9c
0x9dcabb9c: netbsd:mi_switch+0x10
0x9dcabbcc: netbsd:sleepq_block+0xb4
0x9dcabc0c: netbsd:turnstile_block+0x318
0x9dcabc84: netbsd:rw_enter+0x3c0
0x9dcabcb4: netbsd:genfs_lock+0x68
0x9dcabcdc: netbsd:VOP_LOCK+0x40
0x9dcabd04: netbsd:vn_lock+0x88
0x9dcabe64: netbsd:getcwd_common+0x364
0x9dcabeb4: netbsd:dostatvfs+0xcc
0x9dcabee4: netbsd:do_sys_fstatvfs+0x58
0x9dcabf04: netbsd:sys_fstatvfs1+0x38
0x9dcabf7c: netbsd:syscall+0xb8
0x9dcabfac: netbsd:swi_handler+0xa0
db{2}> bt/a 926b6100
trace: pid 17766 lid 1 at 0x9c5f3914
0x9c5f3914: netbsd:mi_switch+0x10
0x9c5f3944: netbsd:sleepq_block+0xb4
0x9c5f3984: netbsd:turnstile_block+0x318
0x9c5f39fc: netbsd:rw_enter+0x3c0
0x9c5f3a2c: netbsd:genfs_lock+0x68
0x9c5f3a54: netbsd:VOP_LOCK+0x40
0x9c5f3a7c: netbsd:vn_lock+0x88
0x9c5f3bdc: netbsd:getcwd_common+0x364
0x9c5f3bfc: netbsd:vn_isunder+0x2c
0x9c5f3c4c: netbsd:lookup_once+0xfc
0x9c5f3d1c: netbsd:namei_tryemulroot+0x528
0x9c5f3d54: netbsd:namei+0x34
0x9c5f3e2c: netbsd:vn_open+0x94
0x9c5f3eac: netbsd:do_open+0xb0
0x9c5f3edc: netbsd:do_sys_openat+0x7c
0x9c5f3f04: netbsd:sys_open+0x38
0x9c5f3f7c: netbsd:syscall+0xb8
0x9c5f3fac: netbsd:swi_handler+0xa0
db{2}> bt/a 91bb9980
trace: pid 19955 lid 1 at 0x9c5f5914
0x9c5f5914: netbsd:mi_switch+0x10
0x9c5f5944: netbsd:sleepq_block+0xb4
0x9c5f5984: netbsd:turnstile_block+0x318
0x9c5f59fc: netbsd:rw_enter+0x3c0
0x9c5f5a2c: netbsd:genfs_lock+0x68
0x9c5f5a54: netbsd:VOP_LOCK+0x40
0x9c5f5a7c: netbsd:vn_lock+0x88
0x9c5f5bdc: netbsd:getcwd_common+0x364
0x9c5f5bfc: netbsd:vn_isunder+0x2c
0x9c5f5c4c: netbsd:lookup_once+0xfc
0x9c5f5d1c: netbsd:namei_tryemulroot+0x528
0x9c5f5d54: netbsd:namei+0x34
0x9c5f5e2c: netbsd:vn_open+0x94
0x9c5f5eac: netbsd:do_open+0xb0
0x9c5f5edc: netbsd:do_sys_openat+0x7c
0x9c5f5f04: netbsd:sys_open+0x38
0x9c5f5f7c: netbsd:syscall+0xb8
0x9c5f5fac: netbsd:swi_handler+0xa0
db{2}> bt/a 91b72ba0
trace: pid 25222 lid 1 at 0x9c6ab914
0x9c6ab914: netbsd:mi_switch+0x10
0x9c6ab944: netbsd:sleepq_block+0xb4
0x9c6ab984: netbsd:turnstile_block+0x318
0x9c6ab9fc: netbsd:rw_enter+0x3c0
0x9c6aba2c: netbsd:genfs_lock+0x68
0x9c6aba54: netbsd:VOP_LOCK+0x40
0x9c6aba7c: netbsd:vn_lock+0x88
0x9c6abbdc: netbsd:getcwd_common+0x364
0x9c6abbfc: netbsd:vn_isunder+0x2c
0x9c6abc4c: netbsd:lookup_once+0xfc
0x9c6abd1c: netbsd:namei_tryemulroot+0x528
0x9c6abd54: netbsd:namei+0x34
0x9c6abe2c: netbsd:vn_open+0x94
0x9c6abeac: netbsd:do_open+0xb0
0x9c6abedc: netbsd:do_sys_openat+0x7c
0x9c6abf04: netbsd:sys_open+0x38
0x9c6abf7c: netbsd:syscall+0xb8
0x9c6abfac: netbsd:swi_handler+0xa0
db{2}> bt/a 94f54c60
trace: pid 26028 lid 1 at 0x9c2e1914
0x9c2e1914: netbsd:mi_switch+0x10
0x9c2e1944: netbsd:sleepq_block+0xb4
0x9c2e1984: netbsd:turnstile_block+0x318
0x9c2e19fc: netbsd:rw_enter+0x3c0
0x9c2e1a2c: netbsd:genfs_lock+0x68
0x9c2e1a54: netbsd:VOP_LOCK+0x40
0x9c2e1a7c: netbsd:vn_lock+0x88
0x9c2e1bdc: netbsd:getcwd_common+0x364
0x9c2e1bfc: netbsd:vn_isunder+0x2c
0x9c2e1c4c: netbsd:lookup_once+0xfc
0x9c2e1d1c: netbsd:namei_tryemulroot+0x528
0x9c2e1d54: netbsd:namei+0x34
0x9c2e1e2c: netbsd:vn_open+0x94
0x9c2e1eac: netbsd:do_open+0xb0
0x9c2e1edc: netbsd:do_sys_openat+0x7c
0x9c2e1f04: netbsd:sys_open+0x38
0x9c2e1f7c: netbsd:syscall+0xb8
0x9c2e1fac: netbsd:swi_handler+0xa0
ps/w:
19955 1 cc1plus netbsd 26 tstile 92ffc4d4
26028 1 cc1plus netbsd 26 tstile 92ffc4d4
25222 1 cc1plus netbsd 26 tstile 92ffc4d4
17766 1 cc1plus netbsd 26 tstile 92ffc4d4
4849 1 qmgr netbsd 43 tstile 926cbb04
db{2}> show lock 92ffc4d4
lock address : 0x0000000092ffc4d4 type : sleep/adaptive
initialized : 0x0000000081365434
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 4
current cpu : 2 last held: 1
current lwp : 0x00000000915aa020 last held: 0x0000000096ddc9e0
last locked* : 0x000000008137a71c unlocked : 0x000000008137a838
owner/count : 0x0000000096ddc9e0 flags : 0x0000000000000007
Turnstile chain at 0x815eaed0.
=> Turnstile at 0x951e5230 (wrq=0x951e5240, rdq=0x951e5248).
=> 0 waiting readers:
=> 4 waiting writers: 0x91bb9980 0x94f54c60 0x91b72ba0 0x926b6100
db{2}> show lock 926cbb04
lock address : 0x00000000926cbb04 type : sleep/adaptive
initialized : 0x0000000081365434
shared holds : 0 exclusive: 1
shares wanted: 0 exclusive: 1
current cpu : 2 last held: 1
current lwp : 0x00000000915aa020 last held: 0x0000000096ddd4e0
last locked* : 0x000000008137a71c unlocked : 0x000000008137a838
owner/count : 0x0000000096ddd4e0 flags : 0x0000000000000007
Turnstile chain at 0x815eaf00.
=> Turnstile at 0x9159acb0 (wrq=0x9159acc0, rdq=0x9159acc8).
=> 0 waiting readers:
=> 1 waiting writers: 0x926b63c0
db{2}> bt/a 0x0000000096ddc9e0
trace: pid 4848 lid 1 at 0x9c53b5c4
0x9c53b5c4: netbsd:mi_switch+0x10
0x9c53b5f4: netbsd:sleepq_block+0xb4
0x9c53b62c: netbsd:cv_wait+0x130
0x9c53b6b4: netbsd:vmem_xalloc+0x504
0x9c53b6f4: netbsd:vmem_alloc+0xe8
0x9c53b77c: netbsd:vmem_xalloc+0x6a8
0x9c53b7bc: netbsd:vmem_alloc+0xe8
0x9c53b7ec: netbsd:qc_poolpage_alloc+0x54
0x9c53b82c: netbsd:pool_grow+0x38
0x9c53b864: netbsd:pool_get+0x80
0x9c53b8ac: netbsd:pool_cache_get_slow+0x224
0x9c53b8e4: netbsd:pool_cache_get_paddr+0x22c
0x9c53b924: netbsd:vmem_alloc+0x90
0x9c53b974: netbsd:uvm_km_kmem_alloc+0x38
0x9c53b98c: netbsd:pool_page_alloc+0x3c
0x9c53b9cc: netbsd:pool_grow+0x38
0x9c53ba04: netbsd:pool_get+0x80
0x9c53ba4c: netbsd:pool_cache_get_slow+0x224
0x9c53ba84: netbsd:pool_cache_get_paddr+0x22c
0x9c53baa4: netbsd:vnalloc+0x2c
0x9c53bb0c: netbsd:vcache_get+0x224
0x9c53bbc4: netbsd:ufs_lookup+0x858
0x9c53bbfc: netbsd:VOP_LOOKUP+0x48
0x9c53bc4c: netbsd:lookup_once+0x19c
0x9c53bd1c: netbsd:namei_tryemulroot+0x528
0x9c53bd54: netbsd:namei+0x34
0x9c53be2c: netbsd:vn_open+0x94
0x9c53beac: netbsd:do_open+0xb0
0x9c53bedc: netbsd:do_sys_openat+0x7c
0x9c53bf04: netbsd:sys_open+0x38
0x9c53bf7c: netbsd:syscall+0xb8
0x9c53bfac: netbsd:swi_handler+0xa0
db{2}> bt/a 0x0000000096ddd4e0
trace: pid 7394 lid 1 at 0x9c471724
0x9c471724: netbsd:mi_switch+0x10
0x9c471754: netbsd:sleepq_block+0xb4
0x9c47178c: netbsd:cv_wait+0x130
0x9c471814: netbsd:vmem_xalloc+0x504
0x9c471854: netbsd:vmem_alloc+0xe8
0x9c4718dc: netbsd:vmem_xalloc+0x6a8
0x9c47191c: netbsd:vmem_alloc+0xe8
0x9c47194c: netbsd:qc_poolpage_alloc+0x54
0x9c47198c: netbsd:pool_grow+0x38
0x9c4719c4: netbsd:pool_get+0x80
0x9c471a0c: netbsd:pool_cache_get_slow+0x224
0x9c471a44: netbsd:pool_cache_get_paddr+0x22c
0x9c471a84: netbsd:vmem_alloc+0x90
0x9c471ad4: netbsd:uvm_km_kmem_alloc+0x38
0x9c471aec: netbsd:pool_page_alloc+0x3c
0x9c471b2c: netbsd:pool_grow+0x38
0x9c471b64: netbsd:pool_get+0x80
0x9c471bac: netbsd:pool_cache_get_slow+0x224
0x9c471be4: netbsd:pool_cache_get_paddr+0x22c
0x9c471c2c: netbsd:kmem_intr_alloc+0x7c
0x9c471ccc: netbsd:ufs_readdir+0x12c
0x9c471d04: netbsd:VOP_READDIR+0x58
0x9c471e64: netbsd:getcwd_common+0x428
0x9c471eb4: netbsd:dostatvfs+0xcc
0x9c471ee4: netbsd:do_sys_fstatvfs+0x58
0x9c471f04: netbsd:sys_fstatvfs1+0x38
0x9c471f7c: netbsd:syscall+0xb8
0x9c471fac: netbsd:swi_handler+0xa0
db{2}>
From: "J. Hannken-Illjes" <hannken@eis.cs.tu-bs.de>
To: gnats-bugs@NetBSD.org
Cc: Jeff Rizzo <riz@NetBSD.org>
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Tue, 10 Nov 2015 11:18:56 +0100
This deadlock looks like vmem exhaustion:
1) Threads 19955, 26028, 25222 and 17766 wait for the vnode
lock 0x92ffc4d4.
This lock is held by thread 0x96ddc9e0 waiting in vmem_xalloc
for vmem to become available.
2) Thread 4849 waits for vnode lock 0x926cbb04.
This lock is held by thread 0x96ddd4e0 also waiting
in vmem_xalloc for vmem to become available.
This has nothing to do with layerfs; there are no "VOP_LOCK > layer_lock > VOP_LOCK"
sequences in the backtraces of the blocked threads.
Could we conclude the patch fixes your issue with layerfs deadlock?
--
J. Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
From: Jeff Rizzo <riz@NetBSD.org>
To: gnats-bugs@NetBSD.org, hannken@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Tue, 10 Nov 2015 09:12:43 -0800
I agree, the issue in this PR, 50375, appears to be fixed. Thank you
very much for your help.
I do, however, have two hosts currently exhibiting the vmem exhaustion
problem, above. I will troubleshoot that and open a new PR if appropriate.
+j
From: Jeff Rizzo <riz@tastylime.net>
To: gnats-bugs@NetBSD.org, hannken@NetBSD.org
Cc:
Subject: Re: kern/50375: layerfs (nullfs) locking problem leading to livelock
Date: Tue, 10 Nov 2015 12:14:53 -0800
Chatting with others has led me to the conclusion that arm
NKMEMPAGES_MAX_DEFAULT is too low at 128MB for this particular use with
LOCKDEBUG.
kern/50375 looks good to go - thanks again.
From: "Juergen Hannken-Illjes" <hannken@netbsd.org>
To: gnats-bugs@gnats.NetBSD.org
Cc:
Subject: PR/50375 CVS commit: src/sys/kern
Date: Thu, 12 Nov 2015 11:35:42 +0000
Module Name: src
Committed By: hannken
Date: Thu Nov 12 11:35:42 UTC 2015
Modified Files:
src/sys/kern: vfs_vnode.c
Log Message:
Take the vnode lock before the vnode is marked VI_CHANGING and fed
to vclean(). Prevents a deadlock with two null mounts on the same
physical mount where one thread tries to vclean() a layer node and
another thread tries to vget() a layer node pointing to the same
physical node.
Fixes PR kern/50375 layerfs (nullfs) locking problem leading to livelock
To generate a diff of this commit:
cvs rdiff -u -r1.45 -r1.46 src/sys/kern/vfs_vnode.c
Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
State-Changed-From-To: analyzed->pending-pullups
State-Changed-By: hannken@NetBSD.org
State-Changed-When: Tue, 05 Jan 2016 11:24:29 +0000
State-Changed-Why:
Pullup requested with ticket #1070.
From: "Soren Jacobsen" <snj@netbsd.org>
To: gnats-bugs@gnats.NetBSD.org
Cc:
Subject: PR/50375 CVS commit: [netbsd-7] src/sys/kern
Date: Tue, 26 Jan 2016 23:43:34 +0000
Module Name: src
Committed By: snj
Date: Tue Jan 26 23:43:34 UTC 2016
Modified Files:
src/sys/kern [netbsd-7]: vfs_vnode.c
Log Message:
Pull up following revision(s) (requested by hannken in ticket #1070):
sys/kern/vfs_vnode.c: revision 1.46 via patch
Take the vnode lock before the vnode is marked VI_CHANGING and fed
to vclean(). Prevents a deadlock with two null mounts on the same
physical mount where one thread tries to vclean() a layer node and
another thread tries to vget() a layer node pointing to the same
physical node.
Fixes PR kern/50375 layerfs (nullfs) locking problem leading to livelock
To generate a diff of this commit:
cvs rdiff -u -r1.37.2.1 -r1.37.2.2 src/sys/kern/vfs_vnode.c
Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
From: "Soren Jacobsen" <snj@netbsd.org>
To: gnats-bugs@gnats.NetBSD.org
Cc:
Subject: PR/50375 CVS commit: [netbsd-7-0] src/sys/kern
Date: Tue, 26 Jan 2016 23:44:12 +0000
Module Name: src
Committed By: snj
Date: Tue Jan 26 23:44:11 UTC 2016
Modified Files:
src/sys/kern [netbsd-7-0]: vfs_vnode.c
Log Message:
Pull up following revision(s) (requested by hannken in ticket #1070):
sys/kern/vfs_vnode.c: revision 1.46 via patch
Take the vnode lock before the vnode is marked VI_CHANGING and fed
to vclean(). Prevents a deadlock with two null mounts on the same
physical mount where one thread tries to vclean() a layer node and
another thread tries to vget() a layer node pointing to the same
physical node.
Fixes PR kern/50375 layerfs (nullfs) locking problem leading to livelock
To generate a diff of this commit:
cvs rdiff -u -r1.37.2.1 -r1.37.2.1.2.1 src/sys/kern/vfs_vnode.c
Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
State-Changed-From-To: pending-pullups->closed
State-Changed-By: hannken@NetBSD.org
State-Changed-When: Wed, 27 Jan 2016 08:55:09 +0000
State-Changed-Why:
Pullups done.
>Unformatted: