NetBSD Problem Report #54988
From mlh@goathill.org Wed Feb 19 18:26:12 2020
Return-Path: <mlh@goathill.org>
Received: from mail.netbsd.org (mail.netbsd.org [199.233.217.200])
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
(Client CN "mail.NetBSD.org", Issuer "mail.NetBSD.org CA" (not verified))
by mollari.NetBSD.org (Postfix) with ESMTPS id C8F621A9213
for <gnats-bugs@gnats.NetBSD.org>; Wed, 19 Feb 2020 18:26:12 +0000 (UTC)
Message-Id: <20200219182610.1C58F12C47@chopper.goathill.org>
Date: Wed, 19 Feb 2020 13:26:10 -0500 (EST)
From: mlh@goathill.org
Reply-To: mlh@goathill.org
To: gnats-bugs@NetBSD.org
Subject: possible memory leaks/swap problems
X-Send-Pr-Version: 3.95
>Number: 54988
>Category: port-amd64
>Synopsis: system freezes shortly after physical memory is exhausted.
>Confidential: no
>Severity: serious
>Priority: medium
>Responsible: ad
>State: closed
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Wed Feb 19 18:30:00 +0000 2020
>Closed-Date: Sat Aug 14 20:32:45 +0000 2021
>Last-Modified: Sat Aug 14 20:32:45 +0000 2021
>Originator: mlh
>Release: NetBSD 9.99.46 Thu Feb 13 13:00:32 EST 2020
>Organization:
none
>Environment:
System: NetBSD tiamat 9.99.46 NetBSD 9.99.46 (HDMIAUDIO) #0: Thu Feb 13 13:00:32 EST 2020 mlh@tiamat:/opt/obj/amd64/opt/src/sys/arch/amd64/compile/HDMIAUDIO amd64
Architecture: x86_64
Machine: amd64
amd64 GENERIC + options HDAUDIO_ENABLE_HDMI
>Description:
When doing simple build operations such as a cvs update plus building
a standard distribution, physical memory is exhausted fairly quickly
and does not appear to be returned. Once physical memory is exhausted,
the system simply freezes, as though something is wrong with swap
operations. I have tried several different swap partitions on
different disks, but that does not appear to change the result. I have
4G of physical memory and cannot even finish building a standard
distribution without having to cycle power to reboot in order to
continue. I can cause the problem to occur within a few minutes by
using pkg_chk to load a list of prebuilt binary pkgsrc packages; it
freezes before even 300 packages are loaded. It might be a problem
with disk cache operations, but I am not sure. Sometimes physical
memory appears to be exhausted by applications, but the freeze can
also happen when application use is only around 20% and the remainder
is disk cache, as when loading binary packages. I can stop the task
before physical memory is exhausted, but I see no evidence that any
appreciable amount of the memory consumed is ever returned: I can stop
when the vmstat "fre" column is down to around 20000, and until I
reboot, free memory stays pretty much at that point. I can even shut
down X and most everything else, and still very little memory becomes
available.
-current as of the last couple of weeks (every build that I have
tried) behaves this way; a build from late January worked fine.
>How-To-Repeat:
$ cd /usr/src
$ cvs up
$ /usr/src/build.sh -j2 -x -r -U -u -m amd64 -D /opt/obj/amd64/build -M /opt/obj/amd64 -T /opt/obj/amd64/tools -R /opt/build/9 tools release
>Fix:
>Release-Note:
>Audit-Trail:
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 19 Feb 2020 16:56:27 -0500 (EST)
gnats-admin@netbsd.org wrote:
> Thank you very much for your problem report.
> It has the internal identification `port-amd64/54988'.
> The individual assigned to look at your
> report is: port-amd64-maintainer.
>
> >Category: port-amd64
> >Responsible: port-amd64-maintainer
> >Synopsis: system freezes shortly after physical memory is exhausted.
> >Arrival-Date: Wed Feb 19 18:30:00 +0000 2020
I noticed that the main consumers appear to be:
$ vmstat -m
Memory resource pool statistics
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
...
ataspl 160 1264935 0 1264935 3 0 3 3 0 inf 3
...
execargs 262144 1775776 0 1775776 8 0 8 8 0 16 8
...
wapblinopl 40 2723208 0 2721719 15 0 15 15 0 inf 0
From: Joerg Sonnenberger <joerg@bec.de>
To: gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 19 Feb 2020 23:33:22 +0100
On Wed, Feb 19, 2020 at 10:00:02PM +0000, MLH wrote:
> I noticed that the main consumers appear to be:
>
> $ vmstat -m
> Memory resource pool statistics
> Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
> ...
> ataspl 160 1264935 0 1264935 3 0 3 3 0 inf 3
> ...
> execargs 262144 1775776 0 1775776 8 0 8 8 0 16 8
> ...
> wapblinopl 40 2723208 0 2721719 15 0 15 15 0 inf 0
If requests ~= releases, it is no memory leak.
Joerg
From: mlh@goathill.org (MLH)
To: Joerg Sonnenberger <joerg@bec.de>
Cc: gnats-bugs@netbsd.org, port-amd64-maintainer@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 19 Feb 2020 23:13:41 -0500 (EST)
Joerg Sonnenberger wrote:
> If requests ~= releases, it is no memory leak.
Any idea on what is preventing the memory from being released and
why it just locks up?
From: Joerg Sonnenberger <joerg@bec.de>
To: gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Thu, 20 Feb 2020 08:59:14 +0100
On Thu, Feb 20, 2020 at 04:15:01AM +0000, MLH wrote:
> > If requests ~= releases, it is no memory leak.
>
> Any idea on what is preventing the memory from being released and
> why it just locks up?
I don't know; I'm just saying that you are looking at the wrong pools.
Look for those with high Npage; those are the ones that are actually big.
Joerg
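Joerg's suggestion can be turned into a quick filter. A minimal
sketch, using the three sample lines quoted in this report as stand-in
input (on a live system you would pipe `vmstat -m` into the awk
command instead of the here-document):

```shell
# Rank pools by Npage (column 8 of "vmstat -m" output) to find the pools
# that are actually holding pages; the sample lines below are the ones
# quoted in this report.
awk 'NF >= 12 && $8 ~ /^[0-9]+$/ { print $8, $1 }' <<'EOF' | sort -rn | head -3
ataspl     160    1264935 0 1264935 3  0 3  3  0 inf 3
execargs   262144 1775776 0 1775776 8  0 8  8  0 16  8
wapblinopl 40     2723208 0 2721719 15 0 15 15 0 inf 0
EOF
```

For these sample lines this prints `15 wapblinopl`, `8 execargs`,
`3 ataspl`; by Joerg's reasoning, the pools at the top of such a list
are the ones worth inspecting.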
From: mlh@goathill.org (MLH)
To: Joerg Sonnenberger <joerg@bec.de>
Cc: gnats-bugs@netbsd.org, port-amd64-maintainer@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Fri, 21 Feb 2020 08:09:59 -0500 (EST)
Joerg Sonnenberger wrote:
> On Thu, Feb 20, 2020 at 04:15:01AM +0000, MLH wrote:
> >
> > Any idea on what is preventing the memory from being released and
> > why it just locks up?
>
> I don't know, I'm just saying that you are looking at the wrong pools.
> Look for those with high Npage, those are actually big.
The problem continues to look like a filesystem-related issue, as I
can run compute-bound jobs with no problem; physical memory is
recovered normally, with no issue. Large or intensive filesystem
writes appear to cause the system to seize, and physical memory does
not even appear to need to be exhausted: it just seized twice with
vmstat showing over 2G of physical memory available. Physical memory
also does not appear to be recovered after intensive filesystem
writes when I stop the workload before the system seizes.
Have any filesystem-related changes been done recently?
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sat, 22 Feb 2020 10:40:11 -0500 (EST)
MLH wrote:
> Have any filesystem-related changes been done recently?
Such as (from CHANGES) :
uvm: More precisely track clean/dirty pages, and change how they are
indexed, speeding up fsync() on large files by orders of
magnitude. Original work done by yamt@. [ad 20200115]
as all was fine just before this change, and with kernels from late
January on I am seeing the problems.
Responsible-Changed-From-To: port-amd64-maintainer->ad
Responsible-Changed-By: ad@NetBSD.org
Responsible-Changed-When: Sat, 22 Feb 2020 18:19:31 +0000
Responsible-Changed-Why:
will take a look
From: Andrew Doran <ad@netbsd.org>
To: MLH <mlh@goathill.org>
Cc: gnats-bugs@netbsd.org, port-amd64-maintainer@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sat, 22 Feb 2020 18:17:24 +0000
Very interesting; it seems like there are a couple of low-memory
scenarios where the system is hanging up. The last time I did a stress
test on this scenario was early January, and everything was fine. I'll
try another.
Andrew
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sat, 22 Feb 2020 13:39:58 -0500 (EST)
Andrew Doran wrote:
> Very interesting, seems like there are a couple of low memory scenarios
> where the system is hanging up. The last time I did a stress test on this
> scenario was early January and everything was fine. I'll try another.
>
> Andrew
Yes, please test. A kernel from about the second week in January
worked fine for me as well; that was the last one that did. I can
trigger a freeze in all sorts of ways with 9.99.4[3-6], all of which
appear to involve either large files or low physical memory.
From: Andrew Doran <ad@netbsd.org>
To: MLH <mlh@goathill.org>
Cc: gnats-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sun, 23 Feb 2020 21:55:48 +0000
Stress testing under low-memory conditions, I'm not able to reproduce
this; however, the machine is headless. I'll try more I/O-intensive
stuff when I get a chance.
If you are still able to run vmstat, then the output of "vmstat -s" say
every 10 seconds over the course of a minute would be very useful. If you
have "top -t" running, it would be interesting to observe if pgdaemon is
consuming a lot of CPU time.
Andrew
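Andrew's request can be collected with a small loop. A minimal sketch
(the `sample` helper and the log file name are hypothetical; on the
affected machine you would invoke it as `sample 10 6 "vmstat -s"`):

```shell
# Run a command COUNT times at INTERVAL-second spacing, timestamping each
# snapshot; intended for capturing "vmstat -s" every 10 seconds over a minute.
sample() {
    interval=$1; count=$2; cmd=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        date                        # timestamp each snapshot
        eval "$cmd"
        i=$((i + 1))
        if [ "$i" -lt "$count" ]; then
            sleep "$interval"
        fi
    done
}

# Quick demonstration with a placeholder command and a short interval;
# on the NetBSD box: sample 10 6 "vmstat -s" > vmstat-s.log
sample 1 2 "echo snapshot" > sample.log
```

The log can then be attached to the PR, and pgdaemon activity checked
in parallel with "top -t" as suggested above.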
From: Andrew Doran <ad@netbsd.org>
To: gnats-bugs@netbsd.org
Cc:
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sun, 23 Feb 2020 22:02:08 +0000
What is the file system / disk configuration on this machine? Are you
running any processes that consume a lot of wired memory by chance?
Andrew
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sun, 23 Feb 2020 17:30:23 -0500 (EST)
Andrew Doran wrote:
> What is the file system / disk configuration on this machine? Are you
> running any processes that consume a lot of wired memory by chance?
$ df
Filesystem 1K-blocks Used Avail %Cap Mounted on
/dev/wd0a 71714288 38691140 29437434 56% /
/dev/wd1f 946611748 285748620 613532544 31% /opt
kernfs 1 1 0 100% /kern
ptyfs 1 1 0 100% /dev/pts
procfs 4 4 0 100% /proc
tmpfs 1044332 0 1044332 0% /var/shm
The result is the same without /opt.
/ is a completely new installation of NetBSD 9.99.46 (GENERIC plus the
HDMI audio compile option) with my previous /etc environment merged
in; it behaves the same with a plain GENERIC kernel. I repartitioned
the drive and reformatted wd0a before installing.
[ 5.259157] wd0 at atabus2 drive 0
[ 5.259157] wd0: <ST380011A>
[ 5.259157] wd0: drive supports 16-sector PIO transfers, LBA48 addressing
[ 5.259157] wd0: 76319 MB, 155061 cyl, 16 head, 63 sec, 512 bytes/sect x 156301488 sectors
[ 5.319178] wd0: 32-bit data port
[ 5.319178] wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
[ 5.319178] wd0(jmide0:1:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA)
[ 5.319178] wd1 at atabus3 drive 0
[ 5.319178] wd1: <ST1000DM003-1ER162>
[ 5.319178] wd1: drive supports 16-sector PIO transfers, LBA48 addressing
[ 5.319178] wd1: 931 GB, 1938021 cyl, 16 head, 63 sec, 512 bytes/sect x 1953525168 sectors
[ 5.379200] wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133), WRITE DMA FUA, NCQ (32 tags)
[ 5.379200] wd1(ahcisata1:0:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA), NCQ (31 tags)
I can also have wd2/3 mounted in a RAID configuration; the behavior is
similar without it, except that when it is mounted the system runs out
of physical memory more quickly.
[ 5.379200] wd2 at atabus5 drive 0
[ 5.379200] wd2: <ST4000DM004-2CV104>
[ 5.379200] wd2: drive supports 16-sector PIO transfers, LBA48 addressing
[ 5.379200] wd2: 3726 GB, 7752021 cyl, 16 head, 63 sec, 512 bytes/sect x 7814037168 sectors
[ 5.399207] wd2: GPT GUID: f10e8bb0-de35-4b54-b365-6571da54d265
[ 5.399207] dk0 at wd2: "boot0", 100000 blocks at 34, type: ffs
[ 5.399207] dk1 at wd2: "swap0", 10000000 blocks at 100034, type: swap
[ 5.399207] dk2 at wd2: "raidw0", 7803937101 blocks at 10100034, type: raidframe
[ 5.409211] wd2: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133), WRITE DMA FUA, NCQ (32 tags)
[ 5.409211] wd2(ahcisata1:4:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA), NCQ (31 tags)
[ 5.409211] wd3 at atabus6 drive 0
[ 5.409211] wd3: <ST4000DM004-2CV104>
[ 5.409211] wd3: drive supports 16-sector PIO transfers, LBA48 addressing
[ 5.409211] wd3: 3726 GB, 7752021 cyl, 16 head, 63 sec, 512 bytes/sect x 7814037168 sectors
[ 5.429218] wd3: GPT GUID: b3d4301a-cbfe-414e-ac61-16571d17d548
[ 5.429218] dk3 at wd3: "boot1", 100000 blocks at 34, type: ffs
[ 5.429218] dk4 at wd3: "swap1", 10000000 blocks at 100034, type: swap
[ 5.429218] dk5 at wd3: "raidw1", 7803937101 blocks at 10100034, type: raidframe
I also get similar results when booting off of dk6 (raid0) without
wd0/1 mounted. The raid0 has the same install environment as wd0a,
but it also failed the same way right after it was pristinely
installed, without any of my older environment and with a GENERIC
kernel.
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Thu, 27 Feb 2020 20:01:36 -0500 (EST)
Andrew Doran wrote:
> If you are still able to run vmstat, then the output of "vmstat -s" say
> every 10 seconds over the course of a minute would be very useful. If you
> have "top -t" running, it would be interesting to observe if pgdaemon is
> consuming a lot of CPU time.
Unfortunately, I saw there were updates to uvm, so I rebuilt
everything, and now it crashes toward the end of X starting up, right
after "Initializing extension GLX".
[ 57.167] (II) Initializing extension X-Resource
[ 57.167] (II) Initializing extension XVideo
[ 57.168] (II) Initializing extension XVideo-MotionCompensation
[ 57.168] (II) Initializing extension GLX
It crashes here:
(gdb) target kvm netbsd.1.core
_kvm_kvatop(0)
(gdb) bt
_kvm_kvatop(0)
So I am having to boot an earlier kernel (NetBSD 9.99.41, Thu Jan 23
13:54:41 EST 2020) off of a thumb drive to bring the box up. It still
has the freeze that looks like a uvm issue.
This is what normally follows "Initializing extension GLX":
[ 75.828] (II) AIGLX: Loaded and initialized r600
[ 75.828] (II) GLX: Initialized DRI2 GL provider for screen 0
[ 75.828] (II) Initializing extension XFree86-VidModeExtension
[ 75.829] (II) Initializing extension XFree86-DGA
[ 75.829] (II) Initializing extension XFree86-DRI
[ 75.829] (II) Initializing extension DRI2
[ 75.881] (II) RADEON(0): Setting screen physical size to 1016 x 317
[ 75.881] (II) RADEON(0): Allocate new frame buffer 3840x1200
[ 75.926] (II) RADEON(0): VRAM usage limit set to 1858971K
[ 78.235] (II) Using input driver 'mouse' for 'Mouse0'
[ 78.235] (**) Option "CorePointer"
[ 78.235] (**) Mouse0: always reports core events
[ 78.277] (**) Option "Protocol" "wsmouse"
[ 78.277] (**) Option "Device" "/dev/wsmouse"
[ 78.277] (**) Mouse0: Protocol: "wsmouse"
[ 78.277] (**) Mouse0: always reports core events
[ 78.278] (==) Mouse0: Emulate3Buttons, Emulate3Timeout: 50
[ 78.278] (**) Option "ZAxisMapping" "4 5 6 7"
[ 78.278] (**) Mouse0: ZAxisMapping: buttons 4, 5, 6 and 7
[ 78.278] (**) Mouse0: Buttons: 11
[ 78.278] (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE, id
[ 78.278] (**) Mouse0: (accel) keeping acceleration scheme 1
[ 78.278] (**) Mouse0: (accel) acceleration profile 0
[ 78.278] (**) Mouse0: (accel) acceleration factor: 2.000
[ 78.278] (**) Mouse0: (accel) acceleration threshold: 4
[ 78.278] (II) Using input driver 'kbd' for 'Keyboard0'
[ 78.278] (**) Option "CoreKeyboard"
[ 78.278] (**) Keyboard0: always reports core events
[ 78.278] (**) Keyboard0: always reports core events
[ 78.278] (**) Option "Protocol" "standard"
[ 78.278] (**) Option "XkbRules" "base"
[ 78.278] (**) Option "XkbModel" "pc105"
[ 78.279] (**) Option "XkbLayout" "us"
[ 78.279] (**) Option "XkbOptions" "ctrl:swapcaps"
[ 78.279] (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOA
[ 92.114] (II) RADEON(0): Allocate new frame buffer 1920x1200
[ 92.115] (II) RADEON(0): VRAM usage limit set to 1867071K
[ 92.154] (II) RADEON(0): Allocate new frame buffer 3840x1200
[ 92.159] (II) RADEON(0): VRAM usage limit set to 1858971K
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 4 Mar 2020 16:30:04 -0500 (EST)
MLH wrote:
> Any idea on what is preventing the memory from being released and
> why it just locks up?
With the changes to -current as of NetBSD 9.99.48 Sun Mar 1 17:38:46
EST 2020, the complete lockups I was seeing when the 4G of physical
memory was exhausted, as well as the other issues, seem to have been
somewhat resolved.
The largest improvements came last week when the big uvm changes were
checked in, and I haven't seen a lockup since the last couple of
changes made on Saturday or Sunday. Those appear to have finally made
this system stable enough to use again, though there still appear to
be memory allocation/deallocation and swap issues, because I have to
reboot at least once a day when physical memory is exhausted and the
system becomes unusable due to severe swapping. This happens when
basically nothing is still running, disk cache is taking up only a
tiny bit of physical memory, and only a tiny bit of swap appears to
be in use.
From: Andrew Doran <ad@netbsd.org>
To: MLH <mlh@goathill.org>
Cc: gnats-bugs@netbsd.org, port-amd64-maintainer@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 4 Mar 2020 21:58:13 +0000
On Wed, Mar 04, 2020 at 04:30:04PM -0500, MLH wrote:
> With the changes to -current as of NetBSD 9.99.48 Sun Mar 1 17:38:46
> EST 2020, the complete lockups I was seeing when the 4G of phymem
> was exhausted as well as the other issues seems to have been somewhat
> resolved.
Thanks for reporting. Not to put a downer on things, but the UVM jumbo
commit was largely a mechanical one to change lock types; any
corrective effect is by chance. There is still investigation to do
here.
Cheers,
Andrew
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org,
gnats-admin@netbsd.goathill.org, port-amd64-maintainer@netbsd.org,
netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 10 Mar 2020 12:54:39 -0400 (EDT)
mlh wrote:
> MLH wrote:
> > Any idea on what is preventing the memory from being released and
> > why it just locks up?
...
> though there still appear
> to be memory alloc/dealloc and swap issues because I have to reboot
> at least once a day when phymem is exhausted and the system becomes
> unusable due to severe swapping. This is when basically nothing is
> still running and disc cache is only taking up a tiny bit of phymem
> and only a tiny bit of swap appears to be in use.
Example - reboot, run one console without X, cvs up -current, and
build a -current distribution. The result is basically a nonworking
system, since there are only a few tens of megabytes of RAM available
for the system to run in. Fewer than 25 basic processes are running
after the build, and none of them are really active.
NetBSD 9.99.48 Sat Mar 7 11:19:02 EST 2020 amd64
[ 1.000000] total memory = 4079 MB
[ 1.000000] avail memory = 3933 MB
[ 1.000000] sysctl_createv: sysctl_locate(maxtypenum) returned 2
[ 1.000000] pool redzone disabled for 'buf4k'
[ 1.000000] pool redzone disabled for 'buf64k'
[ 1.000000] timecounter: Timecounters tick every 10.000 msec
[ 1.000000] Kernelized RAIDframe activated
[ 1.000000] running cgd selftest aes-xts-256 aes-xts-512 done
[ 1.000000] timecounter: Timecounter "i8254" frequency 1193182 Hz quality 100
[ 1.000003] Gigabyte Technology Co., Ltd. H61M-S2-B3 ( )
[ 1.000003] mainbus0 (root)
[ 1.000003] ACPI: RSDP 0x00000000000F6EA0 000014 (v00 GBT )
[ 1.000003] ACPI: RSDT 0x00000000DF7D3040 00004C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: FACP 0x00000000DF7D3100 000074 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: DSDT 0x00000000DF7D31C0 0049F2 (v01 GBT GBTUACPI 00001000 MSFT 04000000)
[ 1.000003] ACPI: FACS 0x00000000DF7D0000 000040
[ 1.000003] ACPI: MSDM 0x00000000DF7D7D00 000055 (v03 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: HPET 0x00000000DF7D7DC0 000038 (v01 GBT GBTUACPI 42302E31 GBTU 00000098)
[ 1.000003] ACPI: MCFG 0x00000000DF7D7E40 00003C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: ASPT 0x00000000DF7D7F00 000034 (v07 GBT PerfTune 312E3042 UTBG 01010101)
[ 1.000003] ACPI: SSPT 0x00000000DF7D7F40 002270 (v01 GBT SsptHead 312E3042 UTBG 01010101)
[ 1.000003] ACPI: EUDS 0x00000000DF7DA1B0 0000C0 (v01 GBT 00000000 00000000)
[ 1.000003] ACPI: TAMG 0x00000000DF7DA270 000382 (v01 GBT GBT B0 5455312E BG?? 45240101)
[ 1.000003] ACPI: APIC 0x00000000DF7D7C00 0000BC (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: SSDT 0x00000000DF7DA600 001EC8 (v01 INTEL PPM RCM 80000001 INTL 20061109)
[ 1.000003] ACPI: 2 ACPI AML tables successfully acquired and loaded
[ 1.000003] ioapic0 at mainbus0 apid 2: pa 0xfec00000, version 0x20, 24 pins
[ 1.000003] cpu0 at mainbus0 apid 0
[ 1.000003] cpu0: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu0: node 0, package 0, core 0, smt 0
[ 1.000003] cpu1 at mainbus0 apid 2
[ 1.000003] cpu1: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu1: node 0, package 0, core 1, smt 0
[ 1.000003] cpu2 at mainbus0 apid 1
[ 1.000003] cpu2: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu2: node 0, package 0, core 0, smt 1
[ 1.000003] cpu3 at mainbus0 apid 3
[ 1.000003] cpu3: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu3: node 0, package 0, core 1, smt 1
$ vmstat
procs memory page disks faults cpu
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
0 0 89556 3809396 1605 0 0 0 0 0 68 37 146 3059 360 0 1 99
$ cd /usr/xsrc
$ cvs up
...
$ cd /usr/src
$ cvs up
...
$ /usr/src/build.sh -j2 -x -r -U -u -m amd64 -D /opt/obj/amd64/build -M /opt/obj/amd64 -T /opt/obj/amd64/tools -R /opt/build/9 tools release install-image
...
The build finishes, taking almost 6 hours - about twice as long as
typical, likely due to thrashing.
$ vmstat
procs memory page disks faults cpu
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
1 0 535932 17812 16003 7 4 12 330 375 7 46 255 15441 1525 13 4 84
So the system is down to less than 20M of free physical memory and
never recovers much more than maybe 10M of it. I have to reboot to
reclaim a workable system, which means rebooting 2-3 times a day to
do much of any work.
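The two vmstat samples above put a number on the loss. A small
calculation (the "fre" column is reported in kilobytes; the two
values are taken from the samples quoted in this report):

```shell
# Free-memory drop across the build, from the two vmstat samples above:
# "fre" went from 3809396 KB before the build to 17812 KB after.
awk 'BEGIN { before = 3809396; after = 17812
             printf "%.0f MB of free memory not recovered\n", (before - after) / 1024 }'
```

That works out to roughly 3703 MB, consistent with nearly all of the
4 GB of physical memory being unaccounted for after the build.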
From: Lars Reichardt <lars@paradoxon.info>
To: MLH <mlh@goathill.org>, gnats-bugs@netbsd.org
Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.goathill.org,
netbsd-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 10 Mar 2020 18:29:43 +0100
On 10.03.2020 17:54, MLH wrote:
> Example - reboot, run one console without X, cvs up -current and
> build a -current distribution. Results in basically a nonworking
> system since there is only a few tens of Mbytes of ram available
> for the system to run in. Less than 25 basic processes running
> after the build and none of them are really active.
>
> NetBSD 9.99.48 Sat Mar 7 11:19:02 EST 2020 amd64
>
> [ 1.000000] total memory = 4079 MB
> [ 1.000000] avail memory = 3933 MB
> [ 1.000000] sysctl_createv: sysctl_locate(maxtypenum) returned 2
> [ 1.000000] pool redzone disabled for 'buf4k'
> [ 1.000000] pool redzone disabled for 'buf64k'
> [ 1.000000] timecounter: Timecounters tick every 10.000 msec
> [ 1.000000] Kernelized RAIDframe activated
> [ 1.000000] running cgd selftest aes-xts-256 aes-xts-512 done
> [ 1.000000] timecounter: Timecounter "i8254" frequency 1193182 Hz quality 100
> [ 1.000003] Gigabyte Technology Co., Ltd. H61M-S2-B3 ( )
> [ 1.000003] mainbus0 (root)
> [ 1.000003] ACPI: RSDP 0x00000000000F6EA0 000014 (v00 GBT )
> [ 1.000003] ACPI: RSDT 0x00000000DF7D3040 00004C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
> [ 1.000003] ACPI: FACP 0x00000000DF7D3100 000074 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
> [ 1.000003] ACPI: DSDT 0x00000000DF7D31C0 0049F2 (v01 GBT GBTUACPI 00001000 MSFT 04000000)
> [ 1.000003] ACPI: FACS 0x00000000DF7D0000 000040
> [ 1.000003] ACPI: MSDM 0x00000000DF7D7D00 000055 (v03 GBT GBTUACPI 42302E31 GBTU 01010101)
> [ 1.000003] ACPI: HPET 0x00000000DF7D7DC0 000038 (v01 GBT GBTUACPI 42302E31 GBTU 00000098)
> [ 1.000003] ACPI: MCFG 0x00000000DF7D7E40 00003C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
> [ 1.000003] ACPI: ASPT 0x00000000DF7D7F00 000034 (v07 GBT PerfTune 312E3042 UTBG 01010101)
> [ 1.000003] ACPI: SSPT 0x00000000DF7D7F40 002270 (v01 GBT SsptHead 312E3042 UTBG 01010101)
> [ 1.000003] ACPI: EUDS 0x00000000DF7DA1B0 0000C0 (v01 GBT 00000000 00000000)
> [ 1.000003] ACPI: TAMG 0x00000000DF7DA270 000382 (v01 GBT GBT B0 5455312E BG?? 45240101)
> [ 1.000003] ACPI: APIC 0x00000000DF7D7C00 0000BC (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
> [ 1.000003] ACPI: SSDT 0x00000000DF7DA600 001EC8 (v01 INTEL PPM RCM 80000001 INTL 20061109)
> [ 1.000003] ACPI: 2 ACPI AML tables successfully acquired and loaded
> [ 1.000003] ioapic0 at mainbus0 apid 2: pa 0xfec00000, version 0x20, 24 pins
> [ 1.000003] cpu0 at mainbus0 apid 0
> [ 1.000003] cpu0: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
> [ 1.000003] cpu0: node 0, package 0, core 0, smt 0
> [ 1.000003] cpu1 at mainbus0 apid 2
> [ 1.000003] cpu1: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
> [ 1.000003] cpu1: node 0, package 0, core 1, smt 0
> [ 1.000003] cpu2 at mainbus0 apid 1
> [ 1.000003] cpu2: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
> [ 1.000003] cpu2: node 0, package 0, core 0, smt 1
> [ 1.000003] cpu3 at mainbus0 apid 3
> [ 1.000003] cpu3: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
> [ 1.000003] cpu3: node 0, package 0, core 1, smt 1
>
> $ vmstat
> procs memory page disks faults cpu
> r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
> 0 0 89556 3809396 1605 0 0 0 0 0 68 37 146 3059 360 0 1 99
> $ cd /usr/xsrc
> $ cvs up
> ...
> $ cd /usr/src
> $ cvs up
> ...
> $ /usr/src/build.sh -j2 -x -r -U -u -m amd64 -D /opt/obj/amd64/build -M /opt/obj/amd64 -T /opt/obj/amd64/tools -R /opt/build/9 tools release install-image
> ...
>
> finished - taking almost 6 hours, about twice as long as typical,
> likely due to thrashing
>
> $ vmstat
> procs memory page disks faults cpu
> r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
> 1 0 535932 17812 16003 7 4 12 330 375 7 46 255 15441 1525 13 4 84
>
> So the system is down to less than 20M of phymem and never recovers
> much more than maybe 10M of it. Have to reboot to reclaim a workable
> system, so I have to reboot 2-3 times a day to do much of any work.
>
What does vmstat -mvW show when memory gets low? The whole output, please.
I'm interested especially in the kva pools.
Lars
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Fri, 20 Mar 2020 19:07:09 -0400 (EDT)
Andrew Doran wrote:
> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
>
> From: Andrew Doran <ad@netbsd.org>
> To: gnats-bugs@netbsd.org
> Cc:
> Subject: Re: port-amd64/54988: possible memory leaks/swap problems
> Date: Sun, 23 Feb 2020 22:02:08 +0000
>
> What is the file system / disk configuration on this machine? Are you
> running any processes that consume a lot of wired memory by chance?
>
> Andrew
With the changes made yesterday as referenced by Andrew in:
Re: Another pmap panic:
> I suggest updating to the latest, delivered yesterday, which has
> fixes for every problem I have encountered or seen mentioned
> including this one, and survives low memory stress testing for me:
I think this can be closed as it appears to have fixed these issues.
Thank You Andrew
From: mlh@goathill.org (MLH)
To: mlh@goathill.org
Cc: gnats-bugs@netbsd.org, ad@netbsd.org, gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Mon, 23 Mar 2020 10:06:12 -0400 (EDT)
mlh wrote:
> Andrew Doran wrote:
> > The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
> >
> > From: Andrew Doran <ad@netbsd.org>
> > To: gnats-bugs@netbsd.org
> > Cc:
> > Subject: Re: port-amd64/54988: possible memory leaks/swap problems
...
> With the changes made yesterday as referenced by Andrew in:
> Re: Another pmap panic:
>
> > I suggest updating to the latest, delivered yesterday, which has
> > fixes for every problem I have encountered or seen mentioned
> > including this one, and survives low memory stress testing for me:
>
> I think this can be closed as it appears to have fixed these issues.
I take that back. While the status is much better, there still
appears to be a big memory leak as of yesterday.
I booted NetBSD 9.99.51 (Sun Mar 22 22:42:31) and did a distribution
build of -current. With 4G of physical memory and over 3.5G available
when the build started, the build did finish without crashing, but
only about 6M of physical memory is left available.
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
1 1 157916 6052 5540 103 5 6 362 600 6 127 660 10541 2276 3 2 96
The box is barely usable running X: it takes about 6 seconds just
to switch focus to another window and about 15 seconds to launch
another xterm. kmem-00192 still looks suspicious.
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
kmem-00192 256 12942910 2 4093 808677 0 808677 808677 0 inf 0
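A back-of-the-envelope check on that pool line (a hedged sketch: the item size, request, release, and page counts are copied from the vmstat -m output above; the 4096-byte pool page size is an assumption, since this output omits the PageSz column):

```python
# Rough leak estimate for the kmem-00192 pool from the vmstat line above.
item_size = 256          # bytes per allocation in kmem-00192 (Size column)
requests  = 12942910     # Requests column
releases  = 4093         # Releases column
npage     = 808677       # Npage column; Pgrel is 0, so no pages were returned
page_size = 4096         # assumed pool page size (not shown in this output)

outstanding  = requests - releases
in_use_bytes = outstanding * item_size
pages_bytes  = npage * page_size

print(f"outstanding allocations: {outstanding}")
print(f"~{in_use_bytes / 2**30:.2f} GiB held by items, "
      f"~{pages_bytes / 2**30:.2f} GiB held in pool pages")
```

Both figures come out around 3 GiB, which on a 4 GiB machine is consistent with the "barely usable" state described above.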
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 24 Mar 2020 08:21:41 -0400 (EDT)
Lars Reichardt wrote:
> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
>
> From: Lars Reichardt <lars@paradoxon.info>
> To: MLH <mlh@goathill.org>, gnats-bugs@netbsd.org
> Cc: port-amd64-maintainer@netbsd.org, gnats-admin@netbsd.goathill.org,
> netbsd-bugs@netbsd.org
> Subject: Re: port-amd64/54988: possible memory leaks/swap problems
> Date: Tue, 10 Mar 2020 18:29:43 +0100
>
> On 10.03.2020 17:54, MLH wrote:
> > mlh wrote:
> >> MLH wrote:
> >>> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
> >>>
> >>> Any idea on what is preventing the memory from being released and
> >>> why it just locks up?
> > ...
> >> though there still appear
> >> to be memory alloc/dealloc and swap issues because I have to reboot
> >> at least once a day when phymem is exhausted and the system becomes
> >> unusable due to severe swapping. This is when basically nothing is
> >> still running and disc cache is only taking up a tiny bit of phymem
> >> and only a tiny bit of swap appears to be in use.
...
> > So the system is down to less than 20M of phymem and never recovers
> > much more than maybe 10M of it. Have to reboot to reclaim a workable
> > system, so I have to reboot 2-3 times a day to do much of any work.
> >
> What does vmstat -mvW show when memory gets low? The whole output, please.
>
> I'm interested especially in the kva pools.
>
> Lars
Ok, I will check that next time. As of NetBSD 9.99.51 (Sun Mar 22),
the system just grinds to a halt when it runs out of physical memory.
It still seems that almost no swap is being used, but at least it
doesn't crash. Yesterday it got down to less than 1M of physical
memory, and since I couldn't get a console up, it took me about an
hour to log in from my phone over wifi in order to do a clean
shutdown. I had managed to shut X down but was getting out-of-memory
errors. Even on shutdown, many things couldn't exit cleanly due to
out-of-memory errors.
I still suspect something in radeondrmkms is leaking memory.
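Since the thread keeps coming back to vmstat -mvW, here is a hedged sketch of how its per-pool lines can be ranked by resident memory (Npage * PageSz), which is what actually pins physical memory. Column positions follow the -mvW header quoted in this PR, and the sample lines are copied from the report; pool names containing spaces (e.g. "swp vnd") would need special handling that this sketch omits:

```python
# Rank pools from `vmstat -mvW` output by memory held in pool pages.
# Sample lines are copied verbatim from the -mvW dumps in this PR.
sample = """\
kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
bufpl 304 95792 0 7819 87973 7096 7369 56 7313 4096 7369 0 inf 53 0x10040 89.3%
vcachepl 640 267595 0 58727 208868 1444 35804 752 35052 4096 35804 0 inf 240 0x10040 93.1%
"""

def pool_bytes(line):
    # -mvW columns: Name Size Requests Fail Releases InUse Avail
    #               Pgreq Pgrel Npage PageSz ...
    f = line.split()
    name = f[0]
    npage, pagesz = int(f[9]), int(f[10])
    return name, npage * pagesz

ranked = sorted((pool_bytes(l) for l in sample.splitlines()),
                key=lambda t: -t[1])
for name, nbytes in ranked:
    print(f"{name:<12} {nbytes / 2**20:8.1f} MiB")
```

On these three lines, kmem-00192 dominates by a wide margin, matching the suspicion voiced throughout the thread.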
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 24 Mar 2020 17:16:30 -0400 (EDT)
MLH wrote:
> > > So the system is down to less than 20M of phymem and never recovers
> > > much more than maybe 10M of it. Have to reboot to reclaim a workable
> > > system, so I have to reboot 2-3 times a day to do much of any work.
> > >
> > what does vmstat -mvW show when memory gets low the whole output?
> >
> > I'm interested especially in the kva pools.
> >
> > Lars
>
> Ok. I will check that next time. As of NetBSD 9.99.51 Sun Mar 22,
Two operations: build a kernel, then build a -current distribution.
The second didn't finish; the system ground to a halt so I stopped
it. I took three sets of vmstat output: before building the kernel,
before building the distribution, and after stopping the build and
rebooting.
Before building the kernel
$ vmstat
procs memory page disks faults cpu
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
2 1 823440 587248 348 0 0 0 34 46 2 20 127 2511 574 0 0 100
$ vmstat -mvW
Memory resource pool statistics
Name Size Requests Fail Releases InUse Avail Pgreq Pgrel Npage PageSz Hiwat Minpg Maxpg Idle Flags Util
ah_tdb_crypto 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_jobs_pool 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_lio_pool 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
amappl 88 10306 0 6799 3507 4278 175 2 173 4096 175 0 inf 0 0x10040 43.6%
anonpl 40 246829 0 193244 53585 104985 1777 207 1570 4096 1777 0 inf 1 0x01040 33.3%
ataspl 160 2501711 0 2501711 0 50 4 2 2 4096 3 0 inf 2 0x10040 0.0%
biopl 304 29139 0 28899 240 7 1558 1539 19 4096 1558 0 inf 0 0x10040 93.8%
brtpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
buf16k 16392 11283 0 10529 754 2 3208 2956 252 65536 1505 1 1 0 0x10040 74.8%
buf1k 1032 59 0 59 0 3 19 18 1 4096 19 1 1 1 0x10040 0.0%
buf2k 2056 36539 0 35314 1225 1 36540 35314 1226 4096 17763 1 1 1 0x10040 50.2%
buf32k 32776 38146 0 28418 9728 1 38147 28418 9729 65536 9729 1 1 1 0x10040 50.0%
buf4k 4096 137128 0 62989 74139 1 137129 62989 74140 4096 88754 1 1 1 0x10000 100.0%
buf512b 520 1186 0 1185 1 6 138 137 1 4096 137 1 1 0 0x10040 12.7%
buf64k 65536 240 0 232 8 1 241 232 9 65536 241 1 1 1 0x10000 88.9%
buf8k 8200 1947 0 1483 464 5 170 103 67 65536 80 1 1 0 0x10040 86.7%
bufpl 304 95792 0 7819 87973 7096 7369 56 7313 4096 7369 0 inf 53 0x10040 89.3%
carp 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
carp6 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
ccdbuf 336 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cd9660nopl 208 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptdesc 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptkop 384 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptop 320 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
csepl 232 0 0 0 0 323 19 0 19 4096 19 19 inf 19 0x10040 0.0%
cwdi 64 247 0 157 90 99 3 0 3 4096 3 0 inf 0 0x10040 46.9%
dbregs 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
dirhepl 48 0 0 0 0 0 0 0 0 4096 0 0 313 0 0x00040 ---
dirhpl 304 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
efsinopl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ehcixfer 384 7 0 3 4 6 1 0 1 4096 1 0 inf 0 0x10040 37.5%
ehcixfer 384 18 0 14 4 6 2 1 1 4096 2 0 inf 0 0x10040 37.5%
esp_tdb_crypto 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
execargs 262144 2335 0 2335 0 2 8 6 2 262144 4 0 16 2 0x10c00 0.0%
ext2fsinopl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
extent 48 9 0 8 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
fcrpl 184 31 0 30 1 83 4 0 4 4096 4 4 inf 3 0x10040 1.1%
fdfile 64 3261 0 1048 2213 496 43 0 43 4096 43 0 inf 0 0x11040 80.4%
ffsdino1 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ffsdino2 264 251731 0 42894 208837 38 14321 396 13925 4096 14320 0 inf 2 0x10040 96.7%
ffsino 256 251571 0 42734 208837 43 13427 372 13055 4096 13425 0 inf 2 0x10000 100.0%
file 128 2201 0 624 1577 345 64 2 62 4096 64 0 inf 0 0x10040 79.5%
filedesc 832 243 0 153 90 38 38 6 32 4096 38 0 inf 0 0x10040 57.1%
icmp 32 2 0 2 0 126 1 0 1 4096 1 0 inf 1 0x00040 0.0%
icmp6 32 6 0 6 0 0 2 2 0 4096 1 0 inf 0 0x00040 ---
igmppl 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
in6pcbpl 280 3247 0 3234 13 1 3 2 1 4096 3 0 inf 0 0x10040 88.9%
inmltpl 56 2 0 0 2 70 1 0 1 4096 1 0 inf 0 0x00040 2.7%
inpcbpl 240 3215 0 3192 23 9 9 7 2 4096 9 0 inf 0 0x10040 67.4%
ipcomp_tdb_cryp 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ipfrenpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kcpuset 64 377 0 175 202 176 6 0 6 4096 6 0 inf 0 0x10040 52.6%
kcredpl 192 573 0 177 396 129 25 0 25 4096 25 0 inf 0 0x10040 74.2%
kmem-00008 8 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10400 ---
kmem-00016 16 10441 0 6306 4135 5849 40 1 39 4096 40 0 inf 0 0x10400 41.4%
kmem-00032 32 8014 0 4258 3756 3924 60 0 60 4096 60 0 inf 0 0x10400 48.9%
kmem-00064 128 12205 0 7821 4384 2343 377 160 217 4096 377 0 inf 1 0x10040 63.1%
kmem-00128 192 9384 0 2341 7043 895 379 1 378 4096 379 0 inf 0 0x10040 87.3%
kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
kmem-00256 320 958 0 423 535 209 70 8 62 4096 70 0 inf 0 0x10040 67.4%
kmem-00320 384 1160 0 346 814 206 107 5 102 4096 107 0 inf 0 0x10040 74.8%
kmem-00384 448 519 0 134 385 65 50 0 50 4096 50 0 inf 0 0x10040 84.2%
kmem-00448 512 369 0 149 220 68 39 3 36 4096 39 0 inf 0 0x10000 76.4%
kmem-00512 576 279 0 119 160 50 31 1 30 4096 31 0 inf 0 0x10040 75.0%
kmem-00768 832 1034 0 367 667 121 240 43 197 4096 240 0 inf 0 0x10040 68.8%
kmem-01024 1088 2493 0 1499 994 233 700 291 409 4096 700 0 inf 0 0x10040 64.6%
kmem-02048 2112 1202 0 774 428 0 1192 764 428 4096 1129 0 inf 0 0x10040 51.6%
kmem-04096 4096 256 0 102 154 0 250 96 154 4096 193 0 inf 0 0x10000 100.0%
ksiginfo 136 207 0 147 60 27 5 2 3 4096 3 0 inf 0 0x10040 66.4%
ktrace 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kva-12288 12288 115 0 70 45 18 4 1 3 262144 4 0 inf 0 0x10e00 70.3%
kva-16384 16384 81 0 59 22 10 4 2 2 262144 4 0 inf 0 0x10e00 68.8%
kva-20480 20480 83 0 55 28 8 7 4 3 262144 7 0 inf 0 0x10e00 72.9%
kva-24576 24576 30 0 19 11 9 4 2 2 262144 3 0 inf 0 0x10e00 51.6%
kva-28672 28672 13 0 11 2 7 3 2 1 262144 2 0 inf 0 0x10e00 21.9%
kva-32768 32768 8 0 6 2 6 1 0 1 262144 1 0 inf 0 0x10e00 25.0%
kva-36864 36864 17 0 6 11 3 3 1 2 262144 3 0 inf 0 0x10e00 77.3%
kva-4096 4096 0 0 0 0 0 0 0 0 262144 0 0 inf 0 0x10e00 ---
kva-40960 40960 3 0 2 1 5 1 0 1 262144 1 0 inf 0 0x10e00 15.6%
kva-49152 49152 5 0 4 1 4 1 0 1 262144 1 0 inf 0 0x10e00 18.8%
kva-65536 65536 4 0 4 0 0 1 1 0 262144 1 0 inf 0 0x10e00 ---
kva-8192 8192 105 0 58 47 17 3 1 2 262144 3 0 inf 0 0x10e00 73.4%
l2cap_pdu 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
l2cap_req 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsdinopl 264 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinoextpl 200 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinopl 224 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfslbnpool 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
llentrypl 280 1 0 0 1 13 1 0 1 4096 1 0 inf 0 0x10040 6.8%
lockf 120 101 0 82 19 14 3 2 1 4096 3 0 inf 0 0x10040 55.7%
lwppl 1088 306 0 127 179 49 97 21 76 4096 97 0 inf 6 0x10040 62.6%
mbpl 520 806 0 415 391 162 83 4 79 4096 83 3 inf 2 0x10040 62.8%
mclpl 2112 598 0 280 318 8 473 147 326 4096 426 8 130541 8 0x10040 50.3%
mqmsgpl 1088 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
msdosfhpl 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
msdosnopl 208 63 0 63 0 0 4 4 0 4096 4 0 inf 0 0x10040 ---
mutex 64 235184 0 40771 194413 18905 3416 30 3386 4096 3416 0 inf 57 0x10040 89.7%
nchentry 192 228451 0 31717 196734 12174 9948 0 9948 4096 9948 0 inf 0 0x10040 92.7%
nfsnodepl 280 3 0 1 2 12 1 0 1 4096 1 0 inf 0 0x10040 13.7%
nfsreqcachepl 104 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
nfsrvdescpl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
nfsvapl 184 3 0 1 2 19 1 0 1 4096 1 0 inf 0 0x10040 9.0%
npfcn4pl 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfcn6pl 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfnatpl 96 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npftblpl 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
pcache 2752 86 0 4 82 0 82 0 82 4096 82 0 inf 0 0x10040 67.2%
pcachecpu 64 267 0 0 267 48 5 0 5 4096 5 0 inf 0 0x10040 83.4%
pcglarge 1088 6709 0 6306 403 2 1643 1508 135 4096 1516 0 inf 0 0x10040 79.3%
pcgnormal 320 50648 0 42378 8270 2698 1462 548 914 4096 1380 0 inf 0 0x10040 70.7%
pdict128 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pdict16 80 301 0 252 49 1 2 1 1 4096 2 0 inf 0 0x10040 95.7%
pdict32 96 15 0 2 13 29 1 0 1 4096 1 0 inf 0 0x10040 30.5%
pdppl 4096 181 0 87 94 0 179 85 94 4096 167 0 inf 0 0x10000 100.0%
pewpl 32 0 0 0 0 126 1 0 1 4096 1 1 1 1 0x00040 0.0%
phpool-1024 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-128 72 60 0 0 60 52 2 0 2 4096 2 0 inf 0 0x10040 52.7%
phpool-2048 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
phpool-256 88 40 0 1 39 6 1 0 1 4096 1 0 inf 0 0x10040 83.8%
phpool-4096 568 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-512 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-64 64 407275 0 63981 343294 56 5493 43 5450 4096 5450 0 inf 0 0x10040 98.4%
phpool-64 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
piperd 320 163 0 82 81 63 12 0 12 4096 12 0 inf 0 0x10040 52.7%
pipewr 320 186 0 108 78 78 14 1 13 4096 14 0 inf 0 0x10040 46.9%
plimitpl 240 61 0 39 22 26 4 1 3 4096 4 0 inf 0 0x10040 43.0%
pmappl 512 181 0 87 94 74 21 0 21 4096 21 0 inf 0 0x10000 56.0%
pnbufpl 1032 189 0 142 47 1 37 21 16 4096 30 0 inf 0 0x10040 74.0%
procpl 896 127 0 39 88 28 30 1 29 4096 30 0 inf 0 0x10040 66.4%
proparay 56 130 0 13 117 27 2 0 2 4096 2 0 inf 0 0x00040 80.0%
propdata 48 1 0 0 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
propdict 56 507 0 156 351 9 6 1 5 4096 6 0 inf 0 0x00040 96.0%
propnmbr 64 47 0 11 36 27 1 0 1 4096 1 0 inf 0 0x10040 56.2%
propstng 48 860 0 290 570 18 8 1 7 4096 8 0 inf 0 0x00040 95.4%
pstatspl 456 128 0 40 88 32 15 0 15 4096 15 0 inf 0 0x10040 65.3%
ptimerpl 328 101 0 84 17 7 4 2 2 4096 4 0 inf 0 0x10040 68.1%
ptimerspl 312 101 0 84 17 9 3 1 2 4096 3 0 inf 0 0x10000 64.7%
puffpnpl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffprkl 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffvapl 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pvpl 128 213753 0 110985 102768 21232 6789 2789 4000 4096 6789 0 inf 0 0x11040 80.3%
ractx 40 139076 0 20272 118804 174 1178 0 1178 4096 1178 0 inf 1 0x00040 98.5%
radixnode 192 85018 0 43075 41943 7092 3340 1005 2335 4096 3340 0 inf 1 0x11040 84.2%
raidpsspl 200 0 0 0 0 20 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_alloclist_pl 264 1962391 0 1962390 1 104 13 6 7 4096 7 5 18 6 0x10040 0.9%
rf_asm_pl 496 1962390 0 1962390 0 96 24 12 12 4096 12 8 24 12 0x10040 0.0%
rf_asmhdr_pl 32 915977 0 915977 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_asmhle_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackfpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackvpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_dagh_pl 136 1962390 0 1962390 0 87 6 3 3 4096 3 2 5 3 0x10040 0.0%
rf_daglist_pl 336 1962390 0 1962390 0 72 14 8 6 4096 6 3 11 6 0x10040 0.0%
rf_dagnode_pl 680 8187823 0 8187823 0 252 107 65 42 4096 47 22 86 42 0x10000 0.0%
rf_dagpcache_pl 720 0 0 0 0 10 2 0 2 4096 2 2 26 2 0x10040 0.0%
rf_dqd_pl 208 2300653 0 2300653 0 114 12 6 6 4096 7 4 14 6 0x10040 0.0%
rf_fss_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_funclist_pl 24 1962390 0 1962390 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_mcpair_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_pda_pl 64 3924780 0 3924780 0 126 2 0 2 4096 2 2 4 2 0x10040 0.0%
rf_rad_pl 488 915977 0 915977 0 40 6 1 5 4096 5 4 16 5 0x10040 0.0%
rf_reconbuffer_ 112 0 0 0 0 36 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_revent_pl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_stripelock_p 56 1962390 0 1962390 0 72 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vfple_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vple_pl 24 30 0 0 30 138 1 0 1 4096 1 1 2 0 0x00040 17.6%
rfcomm_credit 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
rndctx 24 29 0 20 9 159 1 0 1 4096 1 0 inf 0 0x00040 5.3%
rndsample 544 65 0 40 25 10 6 1 5 4096 6 0 586 0 0x10040 66.4%
rndtemp 520 5 0 4 1 6 2 1 1 4096 1 0 inf 0 0x10040 12.7%
rtentpl 328 28 0 2 26 10 3 0 3 4096 3 0 inf 0 0x10040 69.4%
rttmrpl 72 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
rwlock 64 257353 0 45032 212321 7045 3519 37 3482 4096 3519 0 inf 0 0x10040 95.3%
sackholepl 40 17 0 17 0 101 2 1 1 4096 1 0 inf 1 0x00040 0.0%
scxspl 264 29890 0 29890 0 30 2 0 2 4096 2 2 inf 2 0x10040 0.0%
sigacts 3096 180 0 90 90 0 178 88 90 4096 170 0 inf 0 0x10040 75.6%
smbfsnopl 176 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbrqpl 296 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbt2pl 232 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
socket 616 492 0 157 335 43 75 12 63 4096 75 0 inf 0 0x10040 80.0%
swp vnd 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
swp vnx 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
synpl 320 29 0 29 0 0 1 1 0 4096 1 0 inf 0 0x10040 ---
tcpcbpl 840 829 0 815 14 2 33 29 4 4096 33 0 inf 0 0x10040 71.8%
tcpipqepl 64 508 0 508 0 0 1 1 0 4096 1 0 inf 0 0x10040 ---
thplthrd 80 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
tmpfs_dirent 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
tmpfs_node 224 1 0 0 1 17 1 0 1 4096 1 0 inf 0 0x10040 5.5%
tstile 128 298 0 116 182 128 10 0 10 4096 10 0 inf 0 0x10040 56.9%
uaoeltpl 104 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
uarea 20480 208 0 116 92 0 206 114 92 20480 200 0 inf 0 0x10c00 100.0%
uareasys 20480 93 0 3 90 0 93 3 90 20480 93 0 inf 0 0x10c00 100.0%
ufsdir 272 9 0 7 2 13 2 1 1 4096 1 0 inf 0 0x10000 13.3%
ufsdq 88 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
vcachepl 640 267595 0 58727 208868 1444 35804 752 35052 4096 35804 0 inf 240 0x10040 93.1%
vmembt 64 23267 0 8983 14284 2411 265 0 265 4096 265 0 inf 0 0x10040 84.2%
vmmpepl 192 23808 0 12766 11042 11113 1107 52 1055 4096 1107 0 inf 0 0x10040 49.1%
vmsppl 360 176 0 86 90 64 16 2 14 4096 16 0 inf 0 0x10040 56.5%
wapbldealloc 40 1022 0 1022 0 101 5 4 1 4096 4 0 inf 1 0x00040 0.0%
wapblentrypl 48 3749 0 3749 0 84 1 0 1 4096 1 0 inf 1 0x00040 0.0%
wapblinopl 40 140898 0 140845 53 149 2 0 2 4096 2 0 inf 0 0x00040 25.9%
Totals 37908998 0 31685278 6223720 221228 589460 141572 447888
In use 2041819K, total allocated 2400228K; utilization 85.1%
-------
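A hedged sketch comparing the kmem-00192 line in the snapshot above with the one in the snapshot taken before the distribution build (which follows): the Npage values are copied from the two -mvW dumps in this message, and the 4096-byte page size comes from their PageSz column.

```python
# Growth of the kmem-00192 pool between the two -mvW snapshots
# in this message (Npage values copied from the report).
page_size    = 4096
npage_before = 255538   # before building the kernel
npage_after  = 379195   # before building the distribution

growth_pages = npage_after - npage_before
growth_bytes = growth_pages * page_size
print(f"kmem-00192 grew by {growth_pages} pages "
      f"(~{growth_bytes / 2**20:.0f} MiB) across one kernel build")
```

Roughly half a gigabyte of growth per build, with Pgrel stuck at 0 in both snapshots, is what makes this pool look like the leak.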
before building the distribution
$ vmstat
procs memory page disks faults cpu
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
1 0 494576 781856 867 1 0 2 44 62 3 23 143 2886 674 1 0 99
$ vmstat -mvW > no2
Memory resource pool statistics
Name Size Requests Fail Releases InUse Avail Pgreq Pgrel Npage PageSz Hiwat Minpg Maxpg Idle Flags Util
ah_tdb_crypto 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_jobs_pool 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_lio_pool 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
amappl 88 13244 0 6900 6344 1441 175 2 173 4096 175 0 inf 0 0x10040 78.8%
anonpl 40 555865 0 202748 353117 80 3705 208 3497 4096 3497 0 inf 0 0x01040 98.6%
ataspl 160 3840941 0 3840941 0 75 5 2 3 4096 3 0 inf 3 0x10040 0.0%
biopl 304 49975 0 43487 6488 8111 2662 1539 1123 4096 1558 0 inf 623 0x10040 42.9%
brtpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
buf16k 16392 11747 0 11033 714 3 3236 2997 239 65536 1505 1 1 1 0x10040 74.7%
buf1k 1032 59 0 59 0 3 19 18 1 4096 19 1 1 1 0x10040 0.0%
buf2k 2056 36566 0 36299 267 1 36567 36299 268 4096 17763 1 1 1 0x10040 50.0%
buf32k 32776 42414 0 34726 7688 1 42415 34726 7689 65536 9818 1 1 1 0x10040 50.0%
buf4k 4096 147134 0 71769 75365 1 147135 71769 75366 4096 88754 1 1 1 0x10000 100.0%
buf512b 520 1252 0 1250 2 5 138 137 1 4096 137 1 1 0 0x10040 25.4%
buf64k 65536 240 0 232 8 1 241 232 9 65536 241 1 1 1 0x10000 88.9%
buf8k 8200 2079 0 1917 162 104 171 133 38 65536 80 1 1 0 0x10040 53.3%
bufpl 304 95954 0 7819 88135 6245 7369 109 7260 4096 7369 0 inf 0 0x10040 90.1%
carp 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
carp6 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
ccdbuf 336 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cd9660nopl 208 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptdesc 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptkop 384 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptop 320 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
csepl 232 0 0 0 0 323 19 0 19 4096 19 19 inf 19 0x10040 0.0%
cwdi 64 363 0 259 104 148 4 0 4 4096 4 0 inf 1 0x10040 40.6%
dbregs 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
dirhepl 48 0 0 0 0 0 0 0 0 4096 0 0 313 0 0x00040 ---
dirhpl 304 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
efsinopl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ehcixfer 384 7 0 3 4 6 1 0 1 4096 1 0 inf 0 0x10040 37.5%
ehcixfer 384 18 0 14 4 6 2 1 1 4096 2 0 inf 0 0x10040 37.5%
esp_tdb_crypto 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
execargs 262144 24495 0 24495 0 1 10 9 1 262144 4 0 16 1 0x10c00 0.0%
ext2fsinopl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
extent 48 9 0 8 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
fcrpl 184 31 0 30 1 83 4 0 4 4096 4 4 inf 3 0x10040 1.1%
fdfile 64 3623 0 1353 2270 439 43 0 43 4096 43 0 inf 0 0x11040 82.5%
ffsdino1 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ffsdino2 264 251767 0 76625 175142 33733 14321 396 13925 4096 14320 0 inf 16 0x10040 81.1%
ffsino 256 251607 0 76465 175142 33738 13427 372 13055 4096 13425 0 inf 14 0x10000 83.8%
file 128 2408 0 684 1724 198 64 2 62 4096 64 0 inf 0 0x10040 86.9%
filedesc 832 359 0 255 104 92 55 6 49 4096 49 0 inf 13 0x10040 43.1%
icmp 32 2 0 2 0 0 1 1 0 4096 1 0 inf 0 0x00040 ---
icmp6 32 6 0 6 0 0 2 2 0 4096 1 0 inf 0 0x00040 ---
igmppl 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
in6pcbpl 280 3706 0 3693 13 1 3 2 1 4096 3 0 inf 0 0x10040 88.9%
inmltpl 56 2 0 0 2 70 1 0 1 4096 1 0 inf 0 0x00040 2.7%
inpcbpl 240 3607 0 3584 23 9 9 7 2 4096 9 0 inf 0 0x10040 67.4%
ipcomp_tdb_cryp 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ipfrenpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kcpuset 64 575 0 175 400 41 7 0 7 4096 7 0 inf 0 0x10040 89.3%
kcredpl 192 601 0 200 401 124 25 0 25 4096 25 0 inf 1 0x10040 75.2%
kmem-00008 8 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10400 ---
kmem-00016 16 12577 0 6527 6050 3934 40 1 39 4096 40 0 inf 0 0x10400 60.6%
kmem-00032 32 9464 0 4553 4911 2769 60 0 60 4096 60 0 inf 0 0x10400 63.9%
kmem-00064 128 13384 0 8163 5221 1475 377 161 216 4096 377 0 inf 0 0x10040 75.5%
kmem-00128 192 11378 0 3200 8178 12 391 1 390 4096 390 0 inf 0 0x10040 98.3%
kmem-00192 256 6067336 1 222 6067114 6 379195 0 379195 4096 379195 0 inf 0 0x10000 100.0%
kmem-00256 320 1364 0 567 797 7 75 8 67 4096 70 0 inf 0 0x10040 92.9%
kmem-00320 384 1628 0 479 1149 1 120 5 115 4096 115 0 inf 0 0x10040 93.7%
kmem-00384 448 900 0 228 672 3 75 0 75 4096 75 0 inf 0 0x10040 98.0%
kmem-00448 512 696 0 244 452 4 60 3 57 4096 57 0 inf 0 0x10000 99.1%
kmem-00512 576 589 0 202 387 5 57 1 56 4096 56 0 inf 0 0x10040 97.2%
kmem-00768 832 1727 0 471 1256 0 357 43 314 4096 314 0 inf 0 0x10040 81.2%
kmem-01024 1088 4666 0 1654 3012 0 1295 291 1004 4096 1004 0 inf 0 0x10040 79.7%
kmem-02048 2112 2499 0 815 1684 0 2448 764 1684 4096 1684 0 inf 0 0x10040 51.6%
kmem-04096 4096 476 0 187 289 0 385 96 289 4096 289 0 inf 0 0x10000 100.0%
ksiginfo 136 326 0 238 88 28 6 2 4 4096 4 0 inf 0 0x10040 73.0%
ktrace 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kva-12288 12288 270 0 165 105 21 7 1 6 262144 6 0 inf 1 0x10e00 82.0%
kva-16384 16384 215 0 147 68 28 8 2 6 262144 6 0 inf 1 0x10e00 70.8%
kva-20480 20480 228 0 143 85 35 14 4 10 262144 10 0 inf 2 0x10e00 66.4%
kva-24576 24576 111 0 41 70 0 9 2 7 262144 7 0 inf 0 0x10e00 93.8%
kva-28672 28672 30 0 28 2 16 4 2 2 262144 2 0 inf 1 0x10e00 10.9%
kva-32768 32768 20 0 18 2 14 2 0 2 262144 2 0 inf 1 0x10e00 12.5%
kva-36864 36864 19 0 8 11 3 3 1 2 262144 3 0 inf 0 0x10e00 77.3%
kva-4096 4096 0 0 0 0 0 0 0 0 262144 0 0 inf 0 0x10e00 ---
kva-40960 40960 4 0 3 1 5 1 0 1 262144 1 0 inf 0 0x10e00 15.6%
kva-49152 49152 5 0 5 0 5 1 0 1 262144 1 0 inf 1 0x10e00 0.0%
kva-65536 65536 4 0 4 0 0 1 1 0 262144 1 0 inf 0 0x10e00 ---
kva-8192 8192 260 0 143 117 11 5 1 4 262144 4 0 inf 0 0x10e00 91.4%
l2cap_pdu 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
l2cap_req 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsdinopl 264 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinoextpl 200 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinopl 224 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfslbnpool 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
llentrypl 280 1 0 0 1 13 1 0 1 4096 1 0 inf 0 0x10040 6.8%
lockf 120 111 0 96 15 18 3 2 1 4096 3 0 inf 0 0x10040 43.9%
lwppl 1088 417 0 223 194 91 116 21 95 4096 97 0 inf 18 0x10040 54.2%
mbpl 520 992 0 586 406 133 83 6 77 4096 83 3 inf 3 0x10040 66.9%
mclpl 2112 772 0 402 370 18 541 153 388 4096 426 8 130541 18 0x10040 49.2%
mqmsgpl 1088 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
msdosfhpl 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
msdosnopl 208 63 0 63 0 0 4 4 0 4096 4 0 inf 0 0x10040 ---
mutex 64 250040 0 74478 175562 34165 3416 87 3329 4096 3416 0 inf 1 0x10040 82.4%
nchentry 192 240642 0 31879 208763 166 9949 0 9949 4096 9949 0 inf 0 0x10040 98.4%
nfsnodepl 280 3 0 1 2 12 1 0 1 4096 1 0 inf 0 0x10040 13.7%
nfsreqcachepl 104 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
nfsrvdescpl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
nfsvapl 184 3 0 1 2 19 1 0 1 4096 1 0 inf 0 0x10040 9.0%
npfcn4pl 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfcn6pl 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfnatpl 96 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npftblpl 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
pcache 2752 86 0 4 82 0 82 0 82 4096 82 0 inf 0 0x10040 67.2%
pcachecpu 64 267 0 0 267 48 5 0 5 4096 5 0 inf 0 0x10040 83.4%
pcglarge 1088 14696 0 9686 5010 0 3178 1508 1670 4096 1670 0 inf 0 0x10040 79.7%
pcgnormal 320 99658 0 97237 2421 2943 4687 4240 447 4096 4139 0 inf 245 0x10040 42.3%
pdict128 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pdict16 80 301 0 252 49 1 2 1 1 4096 2 0 inf 0 0x10040 95.7%
pdict32 96 15 0 2 13 29 1 0 1 4096 1 0 inf 0 0x10040 30.5%
pdppl 4096 305 0 203 102 91 278 85 193 4096 193 0 inf 91 0x10000 52.8%
pewpl 32 0 0 0 0 126 1 0 1 4096 1 1 1 1 0x00040 0.0%
phpool-1024 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-128 72 60 0 0 60 52 2 0 2 4096 2 0 inf 0 0x10040 52.7%
phpool-2048 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
phpool-256 88 40 0 1 39 6 1 0 1 4096 1 0 inf 0 0x10040 83.8%
phpool-4096 568 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-512 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-64 64 541370 0 72809 468561 33 7481 43 7438 4096 7438 0 inf 0 0x10040 98.4%
phpool-64 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
piperd 320 272 0 210 62 94 13 0 13 4096 13 0 inf 2 0x10040 37.3%
pipewr 320 296 0 235 61 95 14 1 13 4096 14 0 inf 0 0x10040 36.7%
plimitpl 240 147 0 132 15 97 8 1 7 4096 7 0 inf 4 0x10040 12.6%
pmappl 512 305 0 203 102 98 25 0 25 4096 25 0 inf 2 0x10000 51.0%
pnbufpl 1032 310 0 283 27 66 52 21 31 4096 31 0 inf 22 0x10040 21.9%
procpl 896 236 0 141 95 97 49 1 48 4096 48 0 inf 14 0x10040 43.3%
proparay 56 130 0 13 117 27 2 0 2 4096 2 0 inf 0 0x00040 80.0%
propdata 48 1 0 0 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
propdict 56 507 0 156 351 9 6 1 5 4096 6 0 inf 0 0x00040 96.0%
propnmbr 64 47 0 11 36 27 1 0 1 4096 1 0 inf 0 0x10040 56.2%
propstng 48 860 0 290 570 18 8 1 7 4096 8 0 inf 0 0x00040 95.4%
pstatspl 456 237 0 142 95 97 24 0 24 4096 24 0 inf 6 0x10040 44.1%
ptimerpl 328 113 0 92 21 3 4 2 2 4096 4 0 inf 0 0x10040 84.1%
ptimerspl 312 113 0 92 21 5 3 1 2 4096 3 0 inf 0 0x10000 80.0%
puffpnpl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffprkl 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffvapl 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pvpl 128 254945 0 223567 31378 107254 7261 2789 4472 4096 6789 0 inf 314 0x11040 21.9%
ractx 40 147873 0 66718 81155 16714 1178 209 969 4096 1178 0 inf 0 0x00040 81.8%
radixnode 192 88133 0 68192 19941 26826 3340 1113 2227 4096 3340 0 inf 54 0x11040 42.0%
raidpsspl 200 0 0 0 0 20 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_alloclist_pl 264 2725865 0 2725864 1 104 17 10 7 4096 7 5 18 6 0x10040 0.9%
rf_asm_pl 496 2725864 0 2725864 0 96 32 20 12 4096 12 8 24 12 0x10040 0.0%
rf_asmhdr_pl 32 1326198 0 1326198 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_asmhle_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackfpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackvpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_dagh_pl 136 2725864 0 2725864 0 87 8 5 3 4096 3 2 5 3 0x10040 0.0%
rf_daglist_pl 336 2725864 0 2725864 0 72 17 11 6 4096 6 3 11 6 0x10040 0.0%
rf_dagnode_pl 680 11740276 0 11740276 0 282 157 110 47 4096 47 22 86 47 0x10000 0.0%
rf_dagpcache_pl 720 0 0 0 0 10 2 0 2 4096 2 2 26 2 0x10040 0.0%
rf_dqd_pl 208 3562684 0 3562684 0 133 19 12 7 4096 7 4 14 7 0x10040 0.0%
rf_fss_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_funclist_pl 24 2725864 0 2725864 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_mcpair_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_pda_pl 64 5451728 0 5451728 0 126 2 0 2 4096 2 2 4 2 0x10040 0.0%
rf_rad_pl 488 1326198 0 1326198 0 40 7 2 5 4096 5 4 16 5 0x10040 0.0%
rf_reconbuffer_ 112 0 0 0 0 36 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_revent_pl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_stripelock_p 56 2725864 0 2725864 0 72 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vfple_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vple_pl 24 30 0 0 30 138 1 0 1 4096 1 1 2 0 0x00040 17.6%
rfcomm_credit 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
rndctx 24 32 0 24 8 160 1 0 1 4096 1 0 inf 0 0x00040 4.7%
rndsample 544 83 0 55 28 7 6 1 5 4096 6 0 586 0 0x10040 74.4%
rndtemp 520 8 0 8 0 7 2 1 1 4096 1 0 inf 1 0x10040 0.0%
rtentpl 328 28 0 2 26 10 3 0 3 4096 3 0 inf 0 0x10040 69.4%
rttmrpl 72 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
rwlock 64 260546 0 78348 182198 37168 3519 37 3482 4096 3519 0 inf 0 0x10040 81.8%
sackholepl 40 23 0 23 0 0 2 2 0 4096 1 0 inf 0 0x00040 ---
scxspl 264 29890 0 29890 0 30 2 0 2 4096 2 2 inf 2 0x10040 0.0%
sigacts 3096 352 0 181 171 24 283 88 195 4096 195 0 inf 24 0x10040 66.3%
smbfsnopl 176 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbrqpl 296 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbt2pl 232 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
socket 616 514 0 177 337 41 75 12 63 4096 75 0 inf 0 0x10040 80.4%
swp vnd 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
swp vnx 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
synpl 320 29 0 29 0 0 1 1 0 4096 1 0 inf 0 0x10040 ---
tcpcbpl 840 840 0 826 14 2 33 29 4 4096 33 0 inf 0 0x10040 71.8%
tcpipqepl 64 514 0 514 0 0 2 2 0 4096 1 0 inf 0 0x10040 ---
thplthrd 80 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
tmpfs_dirent 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
tmpfs_node 224 1 0 0 1 17 1 0 1 4096 1 0 inf 0 0x10040 5.5%
tstile 128 463 0 198 265 45 10 0 10 4096 10 0 inf 0 0x10040 82.8%
uaoeltpl 104 79 0 36 43 33 2 0 2 4096 2 0 inf 0 0x10040 54.6%
uarea 20480 368 0 198 170 20 304 114 190 20480 200 0 inf 20 0x10c00 89.5%
uareasys 20480 93 0 3 90 0 93 3 90 20480 93 0 inf 0 0x10c00 100.0%
ufsdir 272 15 0 11 4 11 2 1 1 4096 1 0 inf 0 0x10000 26.6%
ufsdq 88 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
vcachepl 640 267835 0 92613 175222 33692 35804 985 34819 4096 35804 0 inf 177 0x10040 78.6%
vmembt 64 37812 0 16940 20872 5084 412 0 412 4096 412 0 inf 0 0x10040 79.2%
vmmpepl 192 37948 0 18322 19626 2529 1107 52 1055 4096 1107 0 inf 3 0x10040 87.2%
vmsppl 360 340 0 174 166 32 20 2 18 4096 18 0 inf 0 0x10040 81.1%
wapbldealloc 40 55995 0 55995 0 101 10 9 1 4096 6 0 inf 1 0x00040 0.0%
wapblentrypl 48 4389 0 4389 0 84 1 0 1 4096 1 0 inf 1 0x00040 0.0%
wapblinopl 40 908209 0 908156 53 149 2 0 2 4096 2 0 inf 0 0x00040 25.9%
Totals 54483904 1 46093266 8390638 364760 740626 162158 578468
In use 2453135K, total allocated 2800216K; utilization 87.6%
-------
After stopping the distribution build
$ vmstat
procs memory page disks faults cpu
r b avm fre flt re pi po fr sr w0 w1 in sy cs us sy id
0 4 1114236 6520 3327 57 36 41 139 290 17 51 278 5451 1106 2 1 97
$ vmstat -mvW
Memory resource pool statistics
Name Size Requests Fail Releases InUse Avail Pgreq Pgrel Npage PageSz Hiwat Minpg Maxpg Idle Flags Util
ah_tdb_crypto 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_jobs_pool 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
aio_lio_pool 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
amappl 88 21768 0 16613 5155 3125 210 26 184 4096 208 0 inf 0 0x10040 60.2%
anonpl 40 1841955 0 1561284 280671 8 9372 6593 2779 4096 6821 0 inf 0 0x01040 98.6%
ataspl 160 11443306 0 11443305 1 49 16 14 2 4096 3 0 inf 1 0x10040 2.0%
biopl 304 76780 0 76638 142 1 3377 3366 11 4096 1558 0 inf 0 0x10040 95.8%
brtpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
buf16k 16392 12701 0 12556 145 92 3285 3206 79 65536 1505 1 1 0 0x10040 45.9%
buf1k 1032 59 0 59 0 3 19 18 1 4096 19 1 1 1 0x10040 0.0%
buf2k 2056 36687 0 36639 48 1 36688 36639 49 4096 17763 1 1 1 0x10040 49.2%
buf32k 32776 46547 0 44012 2535 1 46548 44012 2536 65536 9993 1 1 1 0x10040 50.0%
buf4k 4096 150428 0 140934 9494 1 150429 140934 9495 4096 88754 1 1 1 0x10000 100.0%
buf512b 520 1570 0 1569 1 6 138 137 1 4096 137 1 1 0 0x10040 12.7%
buf64k 65536 240 0 232 8 1 241 232 9 65536 241 1 1 1 0x10000 88.9%
buf8k 8200 2233 0 2154 79 110 171 144 27 65536 80 1 1 0 0x10040 36.6%
bufpl 304 97802 0 85412 12390 35372 7369 3695 3674 4096 7369 0 inf 183 0x10040 25.0%
carp 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
carp6 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
ccdbuf 336 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cd9660nopl 208 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptdesc 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptkop 384 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
cryptop 320 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
csepl 232 0 0 0 0 323 19 0 19 4096 19 19 inf 19 0x10040 0.0%
cwdi 64 1021 0 918 103 86 5 2 3 4096 4 0 inf 0 0x10040 53.6%
dbregs 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
dirhepl 48 0 0 0 0 0 0 0 0 4096 0 0 313 0 0x00040 ---
dirhpl 304 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
efsinopl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ehcixfer 384 7 0 3 4 6 1 0 1 4096 1 0 inf 0 0x10040 37.5%
ehcixfer 384 23 0 19 4 6 2 1 1 4096 2 0 inf 0 0x10040 37.5%
esp_tdb_crypto 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
execargs 262144 432267 0 432267 0 1 46 45 1 262144 4 0 16 1 0x10c00 0.0%
ext2fsinopl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
extent 48 9 0 8 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
fcrpl 184 31 0 30 1 83 4 0 4 4096 4 4 inf 3 0x10040 1.1%
fdfile 64 6281 0 4099 2182 527 43 0 43 4096 43 0 inf 0 0x11040 79.3%
ffsdino1 136 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ffsdino2 264 320936 0 115170 205766 4204 14396 398 13998 4096 14320 0 inf 0 0x10040 94.7%
ffsino 256 316048 0 110282 205766 4234 13497 372 13125 4096 13425 0 inf 0 0x10000 98.0%
file 128 3840 0 2223 1617 305 65 3 62 4096 64 0 inf 0 0x10040 81.5%
filedesc 832 1025 0 922 103 49 74 36 38 4096 53 0 inf 0 0x10040 55.1%
icmp 32 2 0 2 0 0 1 1 0 4096 1 0 inf 0 0x00040 ---
icmp6 32 8 0 8 0 0 3 3 0 4096 1 0 inf 0 0x00040 ---
igmppl 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
in6pcbpl 280 4328 0 4315 13 1 3 2 1 4096 3 0 inf 0 0x10040 88.9%
inmltpl 56 2 0 0 2 70 1 0 1 4096 1 0 inf 0 0x00040 2.7%
inpcbpl 240 4128 0 4106 22 10 9 7 2 4096 9 0 inf 0 0x10040 64.5%
ipcomp_tdb_cryp 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
ipfrenpl 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kcpuset 64 1033 0 813 220 221 8 1 7 4096 8 0 inf 0 0x10040 49.1%
kcredpl 192 868 0 484 384 120 25 1 24 4096 25 0 inf 0 0x10040 75.0%
kmem-00008 8 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10400 ---
kmem-00016 16 20492 0 15691 4801 5183 40 1 39 4096 40 0 inf 0 0x10400 48.1%
kmem-00032 32 17486 0 13197 4289 3391 60 0 60 4096 60 0 inf 0 0x10400 55.8%
kmem-00064 128 22091 0 17137 4954 1649 377 164 213 4096 377 0 inf 0 0x10040 72.7%
kmem-00128 192 31475 0 23722 7753 836 417 8 409 4096 414 0 inf 0 0x10040 88.9%
kmem-00192 256 13022181 10 3403 13018778 6 813674 0 813674 4096 813674 0 inf 0 0x10000 100.0%
kmem-00256 320 3300 0 2702 598 194 120 54 66 4096 111 0 inf 0 0x10040 70.8%
kmem-00320 384 4077 0 3206 871 229 191 81 110 4096 183 0 inf 0 0x10040 74.2%
kmem-00384 448 4621 0 4211 410 94 159 103 56 4096 142 0 inf 0 0x10040 80.1%
kmem-00448 512 4278 0 4002 276 28 166 128 38 4096 132 0 inf 0 0x10000 90.8%
kmem-00512 576 1910 0 1747 163 82 137 102 35 4096 118 0 inf 0 0x10040 65.5%
kmem-00768 832 5234 0 4432 802 34 803 594 209 4096 649 0 inf 0 0x10040 77.9%
kmem-01024 1088 17119 0 14256 2863 23 3050 2088 962 4096 1665 0 inf 7 0x10040 79.1%
kmem-02048 2112 7454 0 6099 1355 0 5321 3966 1355 4096 3120 0 inf 0 0x10040 51.6%
kmem-04096 4096 1144 0 1042 102 1 568 465 103 4096 335 0 inf 1 0x10000 99.0%
ksiginfo 136 1906 0 1873 33 25 11 9 2 4096 4 0 inf 0 0x10040 54.8%
ktrace 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
kva-12288 12288 461 0 443 18 45 8 5 3 262144 6 0 inf 0 0x10e00 28.1%
kva-16384 16384 400 0 396 4 28 9 7 2 262144 6 0 inf 0 0x10e00 12.5%
kva-20480 20480 393 0 370 23 13 16 13 3 262144 10 0 inf 0 0x10e00 59.9%
kva-24576 24576 249 0 241 8 2 12 11 1 262144 10 0 inf 0 0x10e00 75.0%
kva-28672 28672 73 0 73 0 0 9 9 0 262144 5 0 inf 0 0x10e00 ---
kva-32768 32768 69 0 68 1 7 3 2 1 262144 2 0 inf 0 0x10e00 12.5%
kva-36864 36864 23 0 12 11 3 3 1 2 262144 3 0 inf 0 0x10e00 77.3%
kva-4096 4096 0 0 0 0 0 0 0 0 262144 0 0 inf 0 0x10e00 ---
kva-40960 40960 5 0 4 1 5 1 0 1 262144 1 0 inf 0 0x10e00 15.6%
kva-49152 49152 5 0 5 0 0 1 1 0 262144 1 0 inf 0 0x10e00 ---
kva-65536 65536 4 0 4 0 0 1 1 0 262144 1 0 inf 0 0x10e00 ---
kva-8192 8192 408 0 374 34 30 5 3 2 262144 4 0 inf 0 0x10e00 53.1%
l2cap_pdu 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
l2cap_req 128 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsdinopl 264 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinoextpl 200 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfsinopl 224 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
lfslbnpool 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
llentrypl 280 1 0 0 1 13 1 0 1 4096 1 0 inf 0 0x10040 6.8%
lockf 120 300 0 288 12 21 5 4 1 4096 3 0 inf 0 0x10040 35.2%
lwppl 1088 909 0 713 196 29 130 55 75 4096 98 0 inf 0 0x10040 69.4%
mbpl 520 3781 0 3445 336 35 93 40 53 4096 83 3 inf 1 0x10040 80.5%
mclpl 2112 3267 0 2959 308 16 1030 706 324 4096 426 8 130541 16 0x10040 49.0%
mqmsgpl 1088 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
msdosfhpl 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
msdosnopl 208 63 0 63 0 0 4 4 0 4096 4 0 inf 0 0x10040 ---
mutex 64 312788 0 106590 206198 4285 3428 87 3341 4096 3416 0 inf 0 0x10040 96.4%
nchentry 192 241413 0 32835 208578 351 9949 0 9949 4096 9949 0 inf 0 0x10040 98.3%
nfsnodepl 280 3 0 1 2 12 1 0 1 4096 1 0 inf 0 0x10040 13.7%
nfsreqcachepl 104 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
nfsrvdescpl 256 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
nfsvapl 184 3 0 1 2 19 1 0 1 4096 1 0 inf 0 0x10040 9.0%
npfcn4pl 144 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfcn6pl 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npfnatpl 96 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
npftblpl 48 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
pcache 2752 86 0 4 82 0 82 0 82 4096 82 0 inf 0 0x10040 67.2%
pcachecpu 64 267 0 0 267 48 5 0 5 4096 5 0 inf 0 0x10040 83.4%
pcglarge 1088 40125 0 40108 17 25 8394 8380 14 4096 3420 0 inf 8 0x10040 32.3%
pcgnormal 320 156663 0 156468 195 45 6339 6319 20 4096 4139 0 inf 0 0x10040 76.2%
pdict128 192 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pdict16 80 321 0 272 49 1 3 2 1 4096 2 0 inf 0 0x10040 95.7%
pdict32 96 15 0 2 13 29 1 0 1 4096 1 0 inf 0 0x10040 30.5%
pdppl 4096 950 0 847 103 0 478 375 103 4096 211 0 inf 0 0x10000 100.0%
pewpl 32 0 0 0 0 126 1 0 1 4096 1 1 1 1 0x00040 0.0%
phpool-1024 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-128 72 60 0 0 60 52 2 0 2 4096 2 0 inf 0 0x10040 52.7%
phpool-2048 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
phpool-256 88 40 0 1 39 6 1 0 1 4096 1 0 inf 0 0x10040 83.8%
phpool-4096 568 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-512 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
phpool-64 64 980595 0 143772 836823 6 13326 43 13283 4096 13283 0 inf 0 0x10040 98.4%
phpool-64 64 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
piperd 320 665 0 600 65 79 15 3 12 4096 15 0 inf 0 0x10040 42.3%
pipewr 320 755 0 692 63 93 16 3 13 4096 15 0 inf 0 0x10040 37.9%
plimitpl 240 542 0 524 18 30 12 9 3 4096 7 0 inf 0 0x10040 35.2%
pmappl 512 950 0 847 103 89 33 9 24 4096 27 0 inf 0 0x10000 53.6%
pnbufpl 1032 1384 0 1372 12 0 105 101 4 4096 33 0 inf 0 0x10040 75.6%
procpl 896 715 0 613 102 26 60 28 32 4096 50 0 inf 0 0x10040 69.7%
proparay 56 134 0 17 117 27 2 0 2 4096 2 0 inf 0 0x00040 80.0%
propdata 48 1 0 0 1 83 1 0 1 4096 1 0 inf 0 0x00040 1.2%
propdict 56 523 0 172 351 9 6 1 5 4096 6 0 inf 0 0x00040 96.0%
propnmbr 64 49 0 13 36 27 1 0 1 4096 1 0 inf 0 0x10040 56.2%
propstng 48 892 0 322 570 18 8 1 7 4096 8 0 inf 0 0x00040 95.4%
pstatspl 456 720 0 618 102 42 29 11 18 4096 25 0 inf 0 0x10040 63.1%
ptimerpl 328 117 0 96 21 3 4 2 2 4096 4 0 inf 0 0x10040 84.1%
ptimerspl 312 117 0 96 21 5 3 1 2 4096 3 0 inf 0 0x10000 80.0%
puffpnpl 248 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffprkl 120 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
puffvapl 184 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
pvpl 128 300499 0 288687 11812 69005 7261 4654 2607 4096 6789 0 inf 329 0x11040 14.2%
ractx 40 230136 0 138235 91901 9 1478 568 910 4096 1178 0 inf 0 0x00040 98.6%
radixnode 192 145216 0 140681 4535 18565 3340 2240 1100 4096 3340 0 inf 25 0x11040 19.3%
raidpsspl 200 0 0 0 0 20 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_alloclist_pl 264 5193794 0 5193793 1 74 68 63 5 4096 7 5 18 4 0x10040 1.3%
rf_asm_pl 496 5193793 0 5193793 0 88 161 150 11 4096 12 8 24 11 0x10040 0.0%
rf_asmhdr_pl 32 2170796 0 2170796 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_asmhle_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackfpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_callbackvpl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_dagh_pl 136 5193793 0 5193793 0 58 17 15 2 4096 3 2 5 2 0x10040 0.0%
rf_daglist_pl 336 5193793 0 5193793 0 60 108 103 5 4096 6 3 11 5 0x10040 0.0%
rf_dagnode_pl 680 22834855 0 22834855 0 222 872 835 37 4096 47 22 86 37 0x10000 0.0%
rf_dagpcache_pl 720 0 0 0 0 10 2 0 2 4096 2 2 26 2 0x10040 0.0%
rf_dqd_pl 208 7253476 0 7253476 0 114 116 110 6 4096 7 4 14 6 0x10040 0.0%
rf_fss_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_funclist_pl 24 5193793 0 5193793 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_mcpair_pl 48 0 0 0 0 84 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_pda_pl 64 10387586 0 10387586 0 126 2 0 2 4096 2 2 4 2 0x10040 0.0%
rf_rad_pl 488 2170796 0 2170796 0 40 44 39 5 4096 5 4 16 5 0x10040 0.0%
rf_reconbuffer_ 112 0 0 0 0 36 1 0 1 4096 1 1 2 1 0x10040 0.0%
rf_revent_pl 32 0 0 0 0 126 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_stripelock_p 56 5193793 0 5193793 0 72 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vfple_pl 24 0 0 0 0 168 1 0 1 4096 1 1 2 1 0x00040 0.0%
rf_vple_pl 24 30 0 0 30 138 1 0 1 4096 1 1 2 0 0x00040 17.6%
rfcomm_credit 32 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
rndctx 24 37 0 29 8 160 1 0 1 4096 1 0 inf 0 0x00040 4.7%
rndsample 544 206 0 187 19 16 6 1 5 4096 6 0 586 0 0x10040 50.5%
rndtemp 520 13 0 13 0 0 3 3 0 4096 1 0 inf 0 0x10040 ---
rtentpl 328 28 0 2 26 10 3 0 3 4096 3 0 inf 0 0x10040 69.4%
rttmrpl 72 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
rwlock 64 351347 0 140127 211220 8146 3519 37 3482 4096 3519 0 inf 0 0x10040 94.8%
sackholepl 40 23 0 23 0 0 2 2 0 4096 1 0 inf 0 0x00040 ---
scxspl 264 29890 0 29890 0 30 2 0 2 4096 2 2 inf 2 0x10040 0.0%
sigacts 3096 952 0 849 103 0 426 323 103 4096 210 0 inf 0 0x10040 75.6%
smbfsnopl 176 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbrqpl 296 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
smbt2pl 232 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
socket 616 740 0 414 326 52 75 12 63 4096 75 0 inf 0 0x10040 77.8%
swp vnd 312 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10000 ---
swp vnx 40 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
synpl 320 29 0 29 0 0 1 1 0 4096 1 0 inf 0 0x10040 ---
tcpcbpl 840 848 0 835 13 3 33 29 4 4096 33 0 inf 0 0x10040 66.7%
tcpipqepl 64 515 0 515 0 0 3 3 0 4096 1 0 inf 0 0x10040 ---
thplthrd 80 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
tmpfs_dirent 56 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x00040 ---
tmpfs_node 224 1 0 0 1 17 1 0 1 4096 1 0 inf 0 0x10040 5.5%
tstile 128 951 0 755 196 114 10 0 10 4096 10 0 inf 0 0x10040 61.2%
uaoeltpl 104 226 0 179 47 29 3 1 2 4096 3 0 inf 0 0x10040 59.7%
uarea 20480 850 0 744 106 0 415 309 106 20480 202 0 inf 0 0x10c00 100.0%
uareasys 20480 93 0 3 90 0 93 3 90 20480 93 0 inf 0 0x10c00 100.0%
ufsdir 272 70 0 69 1 14 12 11 1 4096 1 0 inf 0 0x10000 6.6%
ufsdq 88 0 0 0 0 0 0 0 0 4096 0 0 inf 0 0x10040 ---
vcachepl 640 345989 0 140179 205810 4130 35991 1001 34990 4096 35804 0 inf 0 0x10040 91.9%
vmembt 64 62704 0 34209 28495 926 467 0 467 4096 467 0 inf 0 0x10040 95.3%
vmmpepl 192 62637 0 48291 14346 8229 1250 175 1075 4096 1194 0 inf 0 0x10040 62.6%
vmsppl 360 941 0 838 103 84 23 6 17 4096 20 0 inf 0 0x10040 53.3%
wapbldealloc 40 70640 0 70640 0 0 16 16 0 4096 6 0 inf 0 0x00040 ---
wapblentrypl 48 6985 0 6985 0 84 4 3 1 4096 1 0 inf 1 0x00040 0.0%
wapblinopl 40 2332333 0 2332276 57 145 2 0 2 4096 2 0 inf 0 0x00040 27.8%
Totals 109665202 10 94066088 15599114 178659 1210640 274601 936039
In use 3770733K, total allocated 3908712K; utilization 96.5%
From: Andrew Doran <ad@netbsd.org>
To: MLH <mlh@goathill.org>
Cc: gnats-bugs@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 24 Mar 2020 21:23:35 +0000
> kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
That's about a gigabyte of kernel memory leaked. I have a radeon card here.
I'll try to reproduce it.
Andrew
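[Editor's note: the "about a gigabyte" figure falls directly out of that vmstat -mW line: the pool's allocation size (second column) times its InUse count (sixth column). A minimal sketch of the arithmetic, assuming the column layout of the -mvW output earlier in this PR:]

```python
# Bytes held by a kernel memory pool, from one `vmstat -mW` line:
# Size (column 2) times InUse (column 6).
# (Pool names containing spaces, e.g. "swp vnd", would need smarter parsing.)
def pool_bytes_in_use(line: str) -> int:
    fields = line.split()
    return int(fields[1]) * int(fields[5])

# The kmem-00192 line quoted above:
line = ("kmem-00192 256 4088775 0 182 4088593 15 "
        "255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%")
print(pool_bytes_in_use(line) / 2**30)  # ~0.97 GiB in use
```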
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 24 Mar 2020 17:34:08 -0400 (EDT)
Andrew Doran wrote:
> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
>
> From: Andrew Doran <ad@netbsd.org>
> To: MLH <mlh@goathill.org>
> Cc: gnats-bugs@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org
> Subject: Re: port-amd64/54988: possible memory leaks/swap problems
> Date: Tue, 24 Mar 2020 21:23:35 +0000
>
> > kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
>
> That's about a gigabyte of kernel memory leaked. I have a radeon card here.
> I'll try to reproduce it.
That's much less than I've seen recently. The really good thing is
that now the system isn't locking up or crashing. It just grinds
to a halt, but with about 20 minutes of patient work :^) I can stop
X and then either get a console up to do some housecleaning and
reboot, or ssh in from my phone to reboot. That's a major improvement
over the last few weeks! Yay!! Thanks!
I say maybe radeon because the more the display changes, the more
memory is leaked. I can just move the mouse around and activate
different windows and watch it change fairly rapidly. If I do jobs
that don't change the display much, the leak rate decreases.
From: David Holland <dholland-bugs@netbsd.org>
To: gnats-bugs@netbsd.org
Cc:
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Fri, 27 Mar 2020 00:13:56 +0000
On Tue, Mar 24, 2020 at 09:25:02PM +0000, Andrew Doran wrote:
>> kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
>
> That's about a gigabyte of kernel memory leaked. I have a radeon card here.
> I'll try to reproduce it.
I have been seeing kmem-192 leaks on a machine with a radeon as well,
but nowhere near as severe as described in this PR (it takes weeks to
burn through a couple of gigs).
FWIW.
--
David A. Holland
dholland@netbsd.org
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Thu, 26 Mar 2020 21:42:48 -0400 (EDT)
David Holland wrote:
> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
>
> From: David Holland <dholland-bugs@netbsd.org>
> To: gnats-bugs@netbsd.org
> Cc:
> Subject: Re: port-amd64/54988: possible memory leaks/swap problems
> Date: Fri, 27 Mar 2020 00:13:56 +0000
>
> On Tue, Mar 24, 2020 at 09:25:02PM +0000, Andrew Doran wrote:
> >> kmem-00192 256 4088775 0 182 4088593 15 255538 0 255538 4096 255538 0 inf 0 0x10000 100.0%
> >
> > That's about a gigabyte of kernel memory leaked. I have a radeon card here.
> > I'll try to reproduce it.
>
> I have been seeing kmem-192 leaks on a machine with a radeon as well,
> but nowhere as severe as described in this PR (takes weeks to burn
> through a couple gigs).
It can take mine as little as about four hours to burn through 4G.
The last time I checked, X didn't have to be running for it to
happen, but I haven't tried that in about a week.
[ 1.027936] radeon0 at pci1 dev 0 function 0: ATI Technologies Radeon HD 6450 (rev. 0x00)
[ 8.893648] radeon0: info: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used)
[ 8.893648] radeon0: info: GTT: 1024M 0x0000000080000000 - 0x00000000BFFFFFFF
[ 8.893648] kern info: [drm] radeon: 2048M of VRAM memory ready
[ 8.893648] kern info: [drm] radeon: 1024M of GTT memory ready.
[ 8.959977] kern info: [drm] radeon: dpm initialized
[ 9.009997] radeon0: info: WB enabled
[ 9.009997] radeon0: info: fence driver on ring 0 use gpu addr 0x0000000080000c00 and cpu addr 0x0xffffbfcb7bd1fc00
[ 9.009997] radeon0: info: fence driver on ring 3 use gpu addr 0x0000000080000c0c and cpu addr 0x0xffffbfcb7bd1fc0c
[ 9.020001] radeon0: info: fence driver on ring 5 use gpu addr 0x0000000000072118 and cpu addr 0x0xffffc300780b2118
[ 9.020001] radeon0: info: radeon: MSI limited to 32-bit
[ 9.020001] radeon0: info: radeon: using MSI.
[ 9.020001] radeon0: interrupting at msi3 vec 0 (radeon0)
[ 9.020001] kern info: [drm] radeon: irq initialized.
[ 10.030413] radeondrmkmsfb0 at radeon0
[ 10.030413] radeondrmkmsfb0: framebuffer at 0xffffc300786e3000, size 1920x1200, depth 32, stride 7680
[ 10.650666] wsdisplay0 at radeondrmkmsfb0 kbdmux 1: console (default, vt100 emulation), using wskbd0
[ 68.046323] kern error: [drm:(/usr/src/sys/external/bsd/drm2/dist/drm/radeon/radeon_btc_dpm.c:2319)btc_dpm_set_power_state] *ERROR* rv770_restrict_performance_levels_before_switch failed
From: Andrew Doran <ad@netbsd.org>
To: gnats-bugs@netbsd.org
Cc: gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sun, 5 Apr 2020 20:58:48 +0000
Tried it recently with a Radeon card, including with OpenGL (Quake II) but
no conclusive repro. Will try again in the near future.
Andrew
From: mlh@goathill.org (MLH)
To: Andrew Doran <ad@netbsd.org>
Cc: gnats-bugs@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Sun, 5 Apr 2020 19:17:26 -0400 (EDT)
Andrew Doran wrote:
> Tried it recently with a Radeon card, including with OpenGL (Quake II) but
> no conclusive repro. Will try again in the near future.
Thanks
Another thing I noticed is that it seems to lose memory the fastest
when text is scrolling in an xterm at high speed, such as building
sets, or building/installing pkgsrc binaries. If I hide the window
during those operations, the loss rate slows pretty dramatically.
From: Andrew Doran <ad@netbsd.org>
To: gnats-bugs@netbsd.org
Cc: gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 7 Apr 2020 22:12:21 +0000
On Sun, Apr 05, 2020 at 11:20:01PM +0000, MLH wrote:
> Another thing I noticed is that it seems to lose memory the fastest
> when text is scrolling in an xterm at high speed, such as building
> sets, or building/installing pkgsrc binaries. If I hide the window
> during those operations, the loss rate slows pretty dramatically.
I tried compiling some stuff with the output going to an xterm and sure
enough it starts to leak out of kmem-192:
$ vmstat -m | grep kmem-00192
kmem-00192 192 2911902 28 0 138662 0 138662 138662 0 inf 0
$ vmstat -C | grep kmem-00192
kmem-00192 182 15 0 39 2939175 3245745 9.4 35909883 91.0
It wasn't leaking before that, so it's a good repro. Looking with dtrace
there are many allocations happening in the DRM code which is probably the
first place I'd look given that it's very X specific. Will need to think
about it some more.
Andrew
From: Jason Thorpe <thorpej@me.com>
To: Andrew Doran <ad@netbsd.org>
Cc: gnats-bugs@netbsd.org,
gnats-admin@netbsd.org,
netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Tue, 7 Apr 2020 15:20:09 -0700
> On Apr 7, 2020, at 3:12 PM, Andrew Doran <ad@netbsd.org> wrote:
>
> It wasn't leaking before that, so it's a good repro. Looking with dtrace
> there are many allocations happening in the DRM code which is probably the
> first place I'd look given that it's very X specific. Will need to think
> about it some more.
Write a dtrace script that builds a stack trace histogram for any
allocation from that bucket?
-- thorpej
From: mlh@goathill.org (MLH)
To: Jason Thorpe <thorpej@me.com>
Cc: Andrew Doran <ad@netbsd.org>, gnats-bugs@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org, mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Wed, 8 Apr 2020 13:35:31 -0400 (EDT)
Jason Thorpe wrote:
>
> > On Apr 7, 2020, at 3:12 PM, Andrew Doran <ad@netbsd.org> wrote:
> >
> > It wasn't leaking before that, so it's a good repro. Looking with dtrace
> > there are many allocations happening in the DRM code which is probably the
> > first place I'd look given that it's very X specific. Will need to think
> > about it some more.
>
> Write a dtrace script that builds a stack trace histogram for any allocation from that bucket?
A bit more than I can figure out how to do. How do you even specify
to watch kmem-00192?
From: Izumi Tsutsui <tsutsui@ceres.dti.ne.jp>
To: ad@netbsd.org, thorpej@me.com
Cc: gnats-bugs@netbsd.org, tsutsui@ceres.dti.ne.jp
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Mon, 13 Apr 2020 01:10:19 +0900
thorpej@ wrote:
> > On Apr 7, 2020, at 3:12 PM, Andrew Doran <ad@netbsd.org> wrote:
> >
> > It wasn't leaking before that, so it's a good repro. Looking with dtrace
> > there are many allocations happening in the DRM code which is probably the
> > first place I'd look given that it's very X specific. Will need to think
> > about it some more.
>
> Write a dtrace script that builds a stack trace histogram for any allocation from that bucket?
In a private discussion, I was told the following dtrace script
(for kmem-96 on NetBSD/i386 9.0 GENERIC):
---
dtrace -n 'fbt::kmem_intr_alloc:entry /80 < (arg0 + 0) && (arg0 + 0) <= 84/ { @["alloc", stack()] = count() } fbt::kmem_intr_free:entry /80 < (arg1 + 0) && (arg1 + 0) <= 84/ { @["free", stack()] = count() } tick-10s { printa(@) }'
---
On NetBSD/amd64 9.0 GENERIC, it looks like kmem-160 is the relevant pool
(not kmem-192 as in -current; probably due to some DIAGNOSTIC option?):
---
dtrace -n 'fbt::kmem_intr_alloc:entry /128 < (arg0 + 0) && (arg0 + 0) <= 148/ { @["alloc", stack()] = count() } fbt::kmem_intr_free:entry /128 < (arg1 + 0) && (arg1 + 0) <= 148/ { @["free", stack()] = count() } tick-10s { printa(@) }'
---
Note 'modload dtrace_profile' is necessary for "tick-10s".
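[Editor's note: the first one-liner above can be laid out as a standalone D script for readability (run as root, e.g. `dtrace -s leak.d` with a file name of your choosing); the probe points and the 80-84 size bounds are copied verbatim from the command above, and the bounds must be adjusted to bracket whichever kmem bucket is suspected:]

```d
/*
 * Histogram of kernel stack traces allocating and freeing from one
 * kmem size bucket (here 80 < size <= 84, the i386 kmem-96 case from
 * the mail above); adjust the bounds for the bucket under suspicion.
 * Requires "modload dtrace_profile" for the tick-10s probe.
 */
fbt::kmem_intr_alloc:entry
/80 < (arg0 + 0) && (arg0 + 0) <= 84/
{
	@["alloc", stack()] = count();
}

fbt::kmem_intr_free:entry
/80 < (arg1 + 0) && (arg1 + 0) <= 84/
{
	@["free", stack()] = count();
}

tick-10s
{
	printa(@);
}
```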
According to the results of these dtrace runs, the culprit looks like the following kmalloc() in
src/sys/external/bsd/drm2/dist/drm/radeon/radeon_fence.c:radeon_fence_emit():
https://nxr.netbsd.org/xref/src/sys/external/bsd/drm2/dist/drm/radeon/radeon_fence.c?r=1.15#141
---
134 int radeon_fence_emit(struct radeon_device *rdev,
135 struct radeon_fence **fence,
136 int ring)
137 {
138 u64 seq = ++rdev->fence_drv[ring].sync_seq[ring];
139
140 /* we are protected by the ring emission mutex */
141 *fence = kmalloc(sizeof(struct radeon_fence), GFP_KERNEL);
142 if ((*fence) == NULL) {
143 return -ENOMEM;
144 }
---
I was also told that the fence memory would be freed from the RCU GC thread
(fence_put() -> fence_release() -> fence_free() -> fence_free_cb() ?)
but it's too complicated for me to investigate.
---
Izumi Tsutsui
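[Editor's note: to make the failure mode above concrete, the pattern described is a refcounted allocation: the kmalloc() in radeon_fence_emit() is balanced only when the last fence_put() runs the release callback. The following is a toy Python model, not the DRM code (all names are illustrative); it only shows why one unpaired reference per fence turns up as a steadily growing InUse count in the kmem bucket:]

```python
# Toy model of a refcounted fence; names are illustrative, not the DRM API.
class KmemPool:
    """Stand-in for a kmem bucket's InUse counter (vmstat -m)."""
    def __init__(self):
        self.in_use = 0

class Fence:
    """Allocated with one reference held by the emitter (cf. the
    kmalloc() in radeon_fence_emit()); freed on the last put()."""
    def __init__(self, pool):
        self.pool = pool
        self.refcount = 1
        pool.in_use += 1              # allocation enters the bucket

    def get(self):
        self.refcount += 1            # a consumer takes a reference

    def put(self):
        self.refcount -= 1
        if self.refcount == 0:        # last put: release callback frees
            self.pool.in_use -= 1

pool = KmemPool()

f = Fence(pool)                       # emit
f.get(); f.put()                      # consumer pairs its get/put
f.put()                               # emitter's final put frees the fence
assert pool.in_use == 0

g = Fence(pool)                       # emit again...
g.get()                               # ...but this reference is never put
g.put()                               # emitter's put alone can't free it
print(pool.in_use)                    # 1: one leaked fence per emission
```

Each leaked reference pins one 192-byte-class allocation, which matches the kmem bucket growth reported in this PR.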
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, gnats-admin@netbsd.org, netbsd-bugs@netbsd.org,
mlh@goathill.org
Subject: Re: port-amd64/54988: possible memory leaks/swap problems
Date: Fri, 12 Jun 2020 09:19:55 -0400 (EDT)
Andrew Doran wrote:
> The following reply was made to PR port-amd64/54988; it has been noted by GNATS.
>
> From: Andrew Doran <ad@netbsd.org>
>
> On Sun, Apr 05, 2020 at 11:20:01PM +0000, MLH wrote:
>
> > Another thing I noticed is that it seems to lose memory the fastest
> > when text is scrolling in an xterm at high speed, such as building
> > sets, or building/installing pkgsrc binaries. If I hide the window
> > during those operations, the loss rate slows pretty dramatically.
>
> I tried compiling some stuff with the output going to an xterm and sure
> enough it starts to leak out of kmem-192:
I found another source of memory leakage this morning at 4am when
"daily" started up. I watched vmstat as memory went from ~70% used
(with essentially nothing running) to ~98% used; the memory was never
recovered and I had to reboot. I had started suspecting this, as every
morning I usually have to reboot the box to make it usable.
This with NetBSD 9.99.64 Fri Jun 5
State-Changed-From-To: open->feedback
State-Changed-By: tsutsui@NetBSD.org
State-Changed-When: Thu, 12 Aug 2021 23:12:46 +0000
State-Changed-Why:
Maybe fixed by src/sys/external/bsd/drm2/linux/linux_reservation.c
rev 1.12, 1.13, and 1.14, which have been pulled up to netbsd-9.
https://mail-index.netbsd.org/source-changes/2021/06/27/msg130457.html
https://mail-index.netbsd.org/source-changes/2021/08/02/msg131262.html
https://mail-index.netbsd.org/source-changes/2021/08/02/msg131266.html
Could you confirm?
From: mlh@goathill.org (MLH)
To: gnats-bugs@netbsd.org
Cc: ad@netbsd.org, netbsd-bugs@netbsd.org, gnats-admin@netbsd.org,
tsutsui@NetBSD.org, mlh@goathill.org
Subject: Re: port-amd64/54988 (system freezes shortly after physical memory is
exhausted.)
Date: Sat, 14 Aug 2021 11:39:22 -0400 (EDT)
tsutsui@NetBSD.org wrote:
> Synopsis: system freezes shortly after physical memory is exhausted.
>
> State-Changed-From-To: open->feedback
> State-Changed-By: tsutsui@NetBSD.org
> State-Changed-When: Thu, 12 Aug 2021 23:12:46 +0000
> State-Changed-Why:
> Maybe fixed by src/sys/external/bsd/drm2/linux/linux_reservation.c
> rev 1.12, 1.13, and 1.14, which have been pulled up to netbsd-9.
> https://mail-index.netbsd.org/source-changes/2021/06/27/msg130457.html
> https://mail-index.netbsd.org/source-changes/2021/08/02/msg131262.html
> https://mail-index.netbsd.org/source-changes/2021/08/02/msg131266.html
> Could you confirm?
This appears to have fixed the issue. It was still a problem with
the last kernel I ran at the end of May 2021. The machine would
exhaust 4GB of physical memory within at most three days, even
if just sitting idle, and often within just a few hours under load.
I ran three HD videos looping all night long with many applications
that are graphics intensive along with rebuilding pkgsrc while
doing an rsync backup to another drive. It never exceeded about
70% of physical memory and released all but about 34% when all was
shut down. The only slowdown with graphics was when all remaining
physical memory was used as disk cache and some had to be recovered before
graphics operations could resume at full speed.
This is actually better behavior and performance than I have ever
seen on this box which is about 9 yrs old...
Thank you!
[ 1.000000] NetBSD 9.99.88 (HDMIAUDIO) #0: Fri Aug 13 13:46:34 EDT 2021
[ 1.000000] ...
[ 1.000000] total memory = 4079 MB
[ 1.000000] avail memory = 3925 MB
[ 1.000000] timecounter: Timecounters tick every 10.000 msec
[ 1.000000] Kernelized RAIDframe activated
[ 1.000000] timecounter: Timecounter "i8254" frequency 1193182 Hz quality 100
[ 1.000003] mainbus0 (root)
[ 1.000003] ACPI: RSDP 0x00000000000F6EA0 000014 (v00 GBT )
[ 1.000003] ACPI: RSDT 0x00000000DF7D3040 00004C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: FACP 0x00000000DF7D3100 000074 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: DSDT 0x00000000DF7D31C0 0049F2 (v01 GBT GBTUACPI 00001000 MSFT 04000000)
[ 1.000003] ACPI: FACS 0x00000000DF7D0000 000040
[ 1.000003] ACPI: MSDM 0x00000000DF7D7D00 000055 (v03 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: HPET 0x00000000DF7D7DC0 000038 (v01 GBT GBTUACPI 42302E31 GBTU 00000098)
[ 1.000003] ACPI: MCFG 0x00000000DF7D7E40 00003C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: ASPT 0x00000000DF7D7F00 000034 (v07 GBT PerfTune 312E3042 UTBG 01010101)
[ 1.000003] ACPI: SSPT 0x00000000DF7D7F40 002270 (v01 GBT SsptHead 312E3042 UTBG 01010101)
[ 1.000003] ACPI: EUDS 0x00000000DF7DA1B0 0000C0 (v01 GBT 00000000 00000000)
[ 1.000003] ACPI: TAMG 0x00000000DF7DA270 000382 (v01 GBT GBT B0 5455312E BG?? 45240101)
[ 1.000003] ACPI: APIC 0x00000000DF7D7C00 0000BC (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
[ 1.000003] ACPI: SSDT 0x00000000DF7DA600 001EC8 (v01 INTEL PPM RCM 80000001 INTL 20061109)
[ 1.000003] ACPI: 2 ACPI AML tables successfully acquired and loaded
[ 1.000003] ioapic0 at mainbus0 apid 2: pa 0xfec00000, version 0x20, 24 pins
[ 1.000003] cpu0 at mainbus0 apid 0
[ 1.000003] cpu0: Use lfence to serialize rdtsc
[ 1.000003] cpu0: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu0: node 0, package 0, core 0, smt 0
[ 1.000003] cpu1 at mainbus0 apid 2
[ 1.000003] cpu1: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu1: node 0, package 0, core 1, smt 0
[ 1.000003] cpu2 at mainbus0 apid 1
[ 1.000003] cpu2: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu2: node 0, package 0, core 0, smt 1
[ 1.000003] cpu3 at mainbus0 apid 3
[ 1.000003] cpu3: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, id 0x206a7
[ 1.000003] cpu3: node 0, package 0, core 1, smt 1
[ 1.000003] acpi0 at mainbus0: Intel ACPICA 20210604
[ 1.000003] acpi0: X/RSDT: OemId <GBT ,GBTUACPI,42302e31>, AslId <GBTU,01010101>
[ 1.000003] acpi0: MCFG: segment 0, bus 0-63, address 0x00000000f4000000
[ 1.000003] acpi0: SCI interrupting at int 9
[ 1.000003] acpi0: fixed power button present
[ 1.000003] timecounter: Timecounter "ACPI-Fast" frequency 3579545 Hz quality 1000
[ 1.025573] hpet0 at acpi0: high precision event timer (mem 0xfed00000-0xfed00400)
[ 1.025573] timecounter: Timecounter "hpet0" frequency 14318180 Hz quality 2000
[ 1.025834] acpibut0 at acpi0 (PWRB, PNP0C0C): ACPI Power Button
[ 1.025834] attimer1 at acpi0 (TMR, PNP0100): io 0x40-0x43
[ 1.025834] pcppi1 at acpi0 (SPKR, PNP0800): io 0x61
[ 1.025834] spkr0 at pcppi1: PC Speaker
[ 1.025834] wsbell at spkr0 not configured
[ 1.025834] midi0 at pcppi1: PC speaker
[ 1.025834] sysbeep0 at pcppi1
[ 1.025834] UAR1 (PNP0501) at acpi0 not configured
[ 1.025834] LPT1 (PNP0400) at acpi0 not configured
[ 1.025834] MEM (PNP0C01) at acpi0 not configured
[ 1.025834] FWH (INT0800) at acpi0 not configured
[ 1.025834] ACPI: Enabled 1 GPEs in block 00 to 3F
[ 1.025834] attimer1: attached to pcppi1
[ 1.025834] pci0 at mainbus0 bus 0: configuration mode 1
[ 1.025834] pci0: i/o space, memory space enabled, rd/line, rd/mult, wr/inv ok
[ 1.025834] pchb0 at pci0 dev 0 function 0: Intel Sandy Bridge (desktop) Host Bridge (rev. 0x09)
[ 1.025834] ppb0 at pci0 dev 1 function 0: Intel Sandy Bridge (desktop) PCIe Root port (rev. 0x09)
[ 1.025834] ppb0: PCI Express capability version 2 <Root Port of PCI-E Root Complex> x16 @ 5.0GT/s
[ 1.025834] pci1 at ppb0 bus 1
[ 1.025834] pci1: i/o space, memory space enabled, rd/line, wr/inv ok
[ 1.025834] radeon0 at pci1 dev 0 function 0: ATI Technologies Radeon HD 6450 (rev. 0x00)
[ 1.025834] hdaudio0 at pci1 dev 0 function 1: HD Audio Controller
[ 1.025834] hdaudio0: interrupting at msi0 vec 0
[ 1.025834] hdaudio0: HDA ver. 1.0, OSS 1, ISS 0, BSS 0, SDO 1, 64-bit
[ 1.025834] hdafg0 at hdaudio0: vendor 1002 product aa01
[ 1.025834] hdafg0: HDMI00 2ch: Digital Out [Jack]
[ 1.025834] hdafg0: 2ch/0ch 32000Hz 44100Hz 48000Hz PCM16 AC3
...
State-Changed-From-To: feedback->closed
State-Changed-By: mrg@NetBSD.org
State-Changed-When: Sat, 14 Aug 2021 20:32:45 +0000
State-Changed-Why:
reported fix by submitter (and others). pullups are done, thanks!
>Unformatted: