NetBSD Problem Report #13837
Received: (qmail 29267 invoked from network); 31 Aug 2001 01:45:15 -0000
Message-Id: <200108310150.f7V1oCK00364@lyra.>
Date: Thu, 30 Aug 2001 20:50:12 -0500 (CDT)
From: gendalia@iastate.edu
Reply-To: gendalia@netbsd.org
To: gnats-bugs@gnats.netbsd.org
Subject: panic: lfs_nextseg: no clean segments
X-Send-Pr-Version: 3.95
>Number: 13837
>Category: kern
>Synopsis: panic: lfs_nextseg: no clean segments
>Confidential: no
>Severity: serious
>Priority: high
>Responsible: kern-bug-people
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Fri Aug 31 01:46:01 +0000 2001
>Closed-Date:
>Last-Modified: Sun Nov 22 22:24:53 +0000 2009
>Originator: Tracy Di Marco White
>Release: NetBSD 1.5.1
>Organization:
>Environment:
System: NetBSD lyra 1.5.1_BETA2 NetBSD 1.5.1_BETA2 (LYRA) #0: Sun Jun 10 18:35:24 CDT 2001 root@lyra:/usr/src/sys/arch/i386/compile/LYRA i386
Large RAID5, small partition.
>Description:
While running bonnie for some benchmark testing, around 17:30 I hit ^C and
ran rm -r on bonnie's tmp file:
Aug 30 15:48:05 lyra lfs_cleanerd[1645]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:05 lyra /netbsd: pid 1645 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:06 lyra /netbsd: pid 1650 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:05 lyra lfs_cleanerd[1650]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:06 lyra lfs_cleanerd[1651]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:06 lyra /netbsd: pid 1651 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:06 lyra /netbsd: pid 1652 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:06 lyra lfs_cleanerd[1652]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:06 lyra lfs_cleanerd[1653]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:06 lyra /netbsd: pid 1653 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:07 lyra lfs_cleanerd[1654]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:07 lyra /netbsd: pid 1654 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:07 lyra lfs_cleanerd[1655]: mmap_segment: malloc failed: Cannot allocate memory
Aug 30 15:48:07 lyra /netbsd: pid 1655 (lfs_cleanerd), uid 0: exited on signal 11 (core dumped)
Aug 30 15:48:07 lyra lfs_cleanerd: /mnt3: cleanerd looping, exiting
Aug 30 15:48:07 lyra lfs_cleanerd: /mnt3: cleanerd looping, exiting
Aug 30 15:54:37 lyra /netbsd: lfs_fits: no fit: db = 128, uinodes = 1, needed = 225, avail = 195
Aug 30 15:54:37 lyra /netbsd: lfs_reserve: waiting for 128 (bfree = 1016030, est_bfree = 864763)
Aug 30 17:38:28 lyra /netbsd: lfs_fits: no fit: db = 32, uinodes = 0, needed = 129, avail = 66
Aug 30 17:38:28 lyra /netbsd: lfs_availwait: out of available space, waiting on cleaner
Aug 30 17:38:31 lyra /netbsd: lfs_fits: no fit: db = 192, uinodes = 2, needed = 289, avail = 34
Aug 30 17:38:31 lyra /netbsd: lfs_reserve: waiting for 192 (bfree = 1015997, est_bfree = 864691)
<panic, rebooted>
Aug 30 19:45:18 lyra savecore: reboot after panic: lfs_nextseg: no clean segments
Aug 30 19:45:18 lyra savecore: reboot after panic: lfs_nextseg: no clean segments
I have a core; bt only shows:
#0 0xc02ccd68 in db_last_command ()
#1 0x3fc1000 in ?? ()
#2 0xc02458f7 in cpu_reboot ()
#3 0xc0118759 in db_sync_cmd ()
#4 0xc0118380 in db_command ()
#5 0xc0118522 in db_command_loop ()
#6 0xc011b2b6 in db_trap ()
#7 0xc0243810 in kdb_trap ()
#8 0xc0249dcc in trap ()
#9 0xc0100d09 in calltrap ()
#10 0xc02248da in lfs_writefile ()
#11 0xc02247a1 in lfs_segwrite ()
#12 0xc02292ce in lfs_sync ()
#13 0xc01a2924 in sys_sync ()
#14 0xc01a1854 in vfs_shutdown ()
#15 0xc02458cf in cpu_reboot ()
#16 0xc01874ed in panic ()
#17 0xc0225a7d in lfs_newseg ()
#18 0xc0225736 in lfs_initseg ()
#19 0xc02264e4 in lfs_writeseg ()
#20 0xc0225061 in lfs_gatherblock ()
#21 0xc0225236 in lfs_gather ()
#22 0xc02249c8 in lfs_writefile ()
#23 0xc02247a1 in lfs_segwrite ()
#24 0xc02292ce in lfs_sync ()
#25 0xc01af597 in sync_fsync ()
#26 0xc01af2c9 in sched_sync ()
>How-To-Repeat:
Fill up an LFS file system, I assume.
>Fix:
>Release-Note:
>Audit-Trail:
Responsible-Changed-From-To: kern-bug-people->perseant
Responsible-Changed-By: fair
Responsible-Changed-When: Wed Sep 5 01:16:31 PDT 2001
Responsible-Changed-Why:
Konrad Schroder is our LFS expert.
From: Thomas Klausner <wiz@danbala.ifoer.tuwien.ac.at>
To: gnats-bugs@netbsd.org
Cc:
Subject: kern/13837
Date: Wed, 25 Dec 2002 17:19:06 +0100
Hi!
Yesterday I accidentally filled an LFS partition on my
1.6K/i386 from Dec 21, 2002.
The final state, as reported by df, was:
Filesystem 1024-blocks Used Avail Capacity Mounted on
/dev/sd0g 1945544 1989412 -238422 113% /usr/obj
I didn't see the first panic, since I was in X.
When I remounted the partition after rebooting, shortly afterwards
I got the following panic:
panic: lfs_nextseg: no clean segments
tr/u reported:
cpu_Debugger
panic
lfs_newseg
lfs_initseg
lfs_writeseg
lfs_gatherblock
lfs_gather
lfs_writefile
lfs_segwrite
lfs_markv
sys_lfs_markv
I was impressed that LFS managed to use more blocks than are available ;)
Thomas
--
Thomas Klausner - wiz@danbala.ifoer.tuwien.ac.at
What is wanted is not the will to believe, but the will to find
out, which is the exact opposite. -- Bertrand Russell
Responsible-Changed-From-To: perseant->kern-bug-people
Responsible-Changed-By: perseant
Responsible-Changed-When: Thu Nov 20 19:55:01 UTC 2003
Responsible-Changed-Why:
Trying to be realistic
From: Greg Oster <oster@cs.usask.ca>
To: gnats-bugs@netbsd.org
Cc:
Subject: kern/13837
Date: Thu, 04 Mar 2004 11:40:10 -0600
Just saw the same panic on a 1.6ZK box whilst extracting pkgsrc onto
a LFS:
rizzo# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/raid0a 867383 492772 331241 59% /
merlin:/home 15240680 10175952 4302694 70% /home
merlin:/u1 89270792 41290032 43517220 48% /u1
merlin:/u2 15240680 10175952 4302694 70% /u2
merlin:/u3 89270792 41290032 43517220 48% /u3
/dev/raid1e 173989 147983 8608 94% /lfs
rizzo# Mar 4 11:14:15 rizzo lfs_cleanerd[693]: clean_segment: LFCNMARKV failed: Resource temporarily unavailable
rizzo#
rizzo#
rizzo# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/raid0a 867383 492772 331241 59% /
merlin:/home 15240680 10175952 4302694 70% /home
merlin:/u1 89270792 41290032 43517220 48% /u1
merlin:/u2 15240680 10175952 4302694 70% /u2
merlin:/u3 89270792 41290032 43517220 48% /u3
/dev/raid1e 173079 182949 -27177 117% /lfs
rizzo# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/raid0a 867383 492772 331241 59% /
merlin:/home 15240680 10175952 4302694 70% /home
merlin:/u1 89270792 41290032 43517220 48% /u1
merlin:/u2 15240680 10175952 4302694 70% /u2
merlin:/u3 89270792 41290032 43517220 48% /u3
/dev/raid1e 173079 182949 -27177 117% /lfs
rizzo# df
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/raid0a 867383 492772 331241 59% /
merlin:/home 15240680 10175952 4302694 70% /home
merlin:/u1 89270792 41290032 43517220 48% /u1
merlin:/u2 15240680 10175952 4302694 70% /u2
merlin:/u3 89270792 41290032 43517220 48% /u3
/dev/raid1e 173051 183045 -27299 117% /lfs
rizzo# panic: lfs_nextseg: no clean segments
Stopped in pid 693.1 (lfs_cleanerd) at netbsd:cpu_Debugger+0x4: leave
db> tr
cpu_Debugger(c5154258,c0c6a800,c4fc282c,1b,c5154270) at netbsd:cpu_Debugger+0x4
panic(c07035c0,c4fc281c,c06bf0d2,0,c0c6a840) at netbsd:panic+0x11d
lfs_newseg(c0c6a800,c0e7f998,c4fc286c,c03951a4,c4fc2854) at netbsd:lfs_newseg+0x342
lfs_initseg(c0c6a800,c0e7f998,7d3,2,2000) at netbsd:lfs_initseg+0x29d
lfs_writeseg(c0c6a800,c50e7000,c06bfe15,c0d07514,1) at netbsd:lfs_writeseg+0x36e
lfs_gatherblock(c50e7000,c1598d04,c4fc291c,1,c0c6a800) at netbsd:lfs_gatherblock+0x83
lfs_gather(c0c6a800,c50e7000,c50da518,c0312ecc,c0c6a800) at netbsd:lfs_gather+0xf3
lfs_writefile(c0c6a800,c50e7000,c50da518,c51551e8,400) at netbsd:lfs_writefile+0x1ef
lfs_segwrite(c0ce7600,5,628,c0c6a800,c07061c0) at netbsd:lfs_segwrite+0x3cb
lfs_vflush(c5c34854,c0c6a800,c5c357ec,c5c34854,c02ce918) at netbsd:lfs_vflush+0xd1
lfs_update(c4fc2a64,c5c34854,0,0,c05ade00) at netbsd:lfs_update+0x2b7
VOP_UPDATE(c5c34854,0,0,1,c5c348dc) at netbsd:VOP_UPDATE+0x34
lfs_fsync(c4fc2ad4,0,c4fc2b0c,c03954ec,c05ad740) at netbsd:lfs_fsync+0xbd
VOP_FSYNC(c5c34854,ffffffff,5,0,0) at netbsd:VOP_FSYNC+0x4c
vinvalbuf(c5c34854,1,ffffffff,c4d5d008,0) at netbsd:vinvalbuf+0x24c
vclean(c5c34854,8,c4d5d008,0,0) at netbsd:vclean+0x228
vgonel(c5c34854,c4d5d008,10e,c074a7c8,c07abfa0) at netbsd:vgonel+0x46
getcleanvnode(c4d5d008,c06bfff7,226,0,120c) at netbsd:getcleanvnode+0x105
getnewvnode(5,c0ce7600,c0b9f000,c4fc2c58,c18eef38) at netbsd:getnewvnode+0xdc
lfs_fastvget(c0ce7600,3d86,8faa,0,c4fc2d14) at netbsd:lfs_fastvget+0x97
lfs_markv(c4d5d008,c0ce769c,c12ae000,117d,c50da518) at netbsd:lfs_markv+0x276
lfs_fcntl(c4fc2d94,0,1000,0,c05ad600) at netbsd:lfs_fcntl+0x21f
VOP_FCNTL(c50da518,b008cc03,c4fc2e24,1,c0ca8400) at netbsd:VOP_FCNTL+0x40
vn_fcntl(c4d5cc30,b008cc03,c4fc2e24,c4d5d008,1) at netbsd:vn_fcntl+0x34
fcntl_forfs(5,c4d5d008,b008cc03,bfbff9d0,c4cfd298) at netbsd:fcntl_forfs+0x92
sys_fcntl(c4d00840,c4fc2f64,c4fc2f5c,0,c03fd9c3) at netbsd:sys_fcntl+0x504
syscall_plain(c4fc2fa8,bfbf001f,bfbf001f,806001f,bfbf001f) at netbsd:syscall_plain+0x7e
db>
db> show all proc
PID PPID PGRP UID S FLAGS LWPS COMMAND WAIT
424 338 416 0 2 0x5002 1 tar lfs seg
>693 662 662 0 2 0 1 lfs_cleanerd
662 1 662 0 2 0 1 lfs_cleanerd wait
467 0 0 0 2 0x20200 1 raidio1
692 0 0 0 2 0x20200 1 raid1 rfwcond
338 1 338 0 2 0x4002 1 csh ttyin
176 1 176 0 2 0 1 cron nanosle
169 1 169 0 2 0 1 inetd select
378 341 341 0 2 0 1 nfsd nfsd
318 341 341 0 2 0 1 nfsd nfsd
369 341 341 0 2 0 1 nfsd nfsd
374 341 341 0 2 0 1 nfsd nfsd
341 1 341 0 2 0 1 nfsd select
311 1 311 0 2 0 1 mountd select
248 0 0 0 2 0x20200 1 nfsio nfsidl
243 0 0 0 2 0x20200 1 nfsio nfsidl
156 0 0 0 2 0x20200 1 nfsio nfsidl
242 0 0 0 2 0x20200 1 nfsio nfsidl
217 1 217 0 2 0 1 rpcbind poll
189 1 189 0 2 0 1 syslogd poll
15 0 0 0 2 0x20200 1 aiodoned aiodone
14 0 0 0 2 0x20200 1 ioflush syncer
13 0 0 0 2 0x20200 1 pagedaemon pgdaemo
12 0 0 0 2 0x20200 1 lfs_writer lfswrit
11 0 0 0 2 0x20200 1 raidio0 raidiow
10 0 0 0 2 0x20200 1 raid0 rfwcond
9 0 0 0 2 0x20200 1 scsibus2 sccomp
8 0 0 0 2 0x20200 1 scsibus1 sccomp
7 0 0 0 2 0x20200 1 scsibus0 sccomp
6 0 0 0 2 0x20200 1 usbtask usbtsk
5 0 0 0 2 0x20200 1 usb0 usbevt
4 0 0 0 2 0x20200 1 atabus1 atath
3 0 0 0 2 0x20200 1 atabus0 atath
2 0 0 0 2 0x20200 1 cryptoret crypto_
1 0 1 0 2 0x4000 1 init wait
0 -1 0 0 2 0x20200 1 swapper schedul
db>
db> show uvmexp
Current UVM status:
pagesize=4096 (0x1000), pagemask=0xfff, pageshift=12
30101 VM pages: 13989 active, 7149 inactive, 147 wired, 1252 free
min 10% (25) anon, 10% (25) file, 5% (12) exec
max 80% (204) anon, 50% (128) file, 30% (76) exec
pages 7989 anon, 18221 file, 507 exec
freemin=64, free-target=85, inactive-target=7149, wired-max=10033
faults=161684, traps=166018, intrs=985762, ctxswitch=193314
softint=845750, syscalls=659214, swapins=0, swapouts=0
fault counts:
noram=0, noanon=0, pgwait=0, pgrele=0
ok relocks(total)=632(632), anget(retrys)=101296(0), amapcopy=2537
neighbor anon/obj pg=4265/19167, gets(lock/unlock)=6002/632
cases: anon=51216, anoncow=3088, obj=5463, prcopy=539, przero=29202
daemon and swap counts:
woke=100, revs=100, scans=151392, obscans=64595, anscans=0
busy=0, freed=0, reactivate=78564, deactivate=229122
pageouts=0, pending=0, nswget=0
nswapdev=0, nanon=28200, nanonneeded=28200 nfreeanon=25788
swpages=0, swpginuse=0, swpgonly=0 paging=0
db>
Can probably reproduce this one at will....
Later...
Greg Oster
Responsible-Changed-From-To: kern-bug-people->go
Responsible-Changed-By: tls
Responsible-Changed-When: Wed Mar 31 07:07:01 UTC 2004
Responsible-Changed-Why:
I think Greg may actually have _fixed_ this by now.
Responsible-Changed-From-To: go->oster
Responsible-Changed-By: tls
Responsible-Changed-When: Wed Mar 31 07:14:12 UTC 2004
Responsible-Changed-Why:
again, get Greg's email address right. :-)
Responsible-Changed-From-To: oster->kern-bug-people
Responsible-Changed-By: oster@narn.netbsd.org
Responsible-Changed-When: Fri, 25 Jan 2008 16:26:42 +0000
Responsible-Changed-Why:
I haven't been looking at LFS bits in ages.
State-Changed-From-To: open->closed
State-Changed-By: dsl@NetBSD.org
State-Changed-When: Sun, 22 Nov 2009 21:35:42 +0000
State-Changed-Why:
A lot of fixes were done to LFS after 2004.
It has now been deprecated.
State-Changed-From-To: closed->open
State-Changed-By: dholland@NetBSD.org
State-Changed-When: Sun, 22 Nov 2009 22:24:53 +0000
State-Changed-Why:
lfs is not deprecated yet, you're thinking of softupdates
>Unformatted: