NetBSD Problem Report #53591
From gson@gson.org Tue Sep 11 08:46:23 2018
Return-Path: <gson@gson.org>
Received: from mail.netbsd.org (mail.netbsd.org [199.233.217.200])
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
(Client CN "mail.NetBSD.org", Issuer "mail.NetBSD.org CA" (not verified))
by mollari.NetBSD.org (Postfix) with ESMTPS id 773407A151
for <gnats-bugs@gnats.NetBSD.org>; Tue, 11 Sep 2018 08:46:23 +0000 (UTC)
Message-Id: <20180911084619.B2B20989770@guava.gson.org>
Date: Tue, 11 Sep 2018 11:46:19 +0300 (EEST)
From: gson@gson.org (Andreas Gustafsson)
Reply-To: gson@gson.org (Andreas Gustafsson)
To: gnats-bugs@NetBSD.org
Subject: [system] process uses >400% CPU on idle machine
X-Send-Pr-Version: 3.95
>Number: 53591
>Category: kern
>Synopsis: [system] process uses >400% CPU on idle machine
>Confidential: no
>Severity: serious
>Priority: high
>Responsible: kern-bug-people
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Tue Sep 11 08:50:00 +0000 2018
>Last-Modified: Tue Sep 11 10:20:00 +0000 2018
>Originator: Andreas Gustafsson
>Release: NetBSD 8.0
>Organization:
>Environment:
System: NetBSD guido
Architecture: x86_64
Machine: amd64
>Description:
My 12-core HP DL360 G7 system running NetBSD/amd64 8.0 has now somehow
gotten itself into a state where the [system] process is using >400%
CPU even though the system is idle. "top" shows:
load averages: 0.00, 0.00, 0.80; up 1+18:48:30
51 processes: 45 sleeping, 4 stopped, 2 on CPU
CPU states: 0.0% user, 0.0% nice, 34.8% system, 0.0% interrupt, 65.1% idle
Memory: 20G Act, 10G Inact, 348K Wired, 33M Exec, 4875M File, 62M Free
Swap:
  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
    0 root       0    0     0K  133M CPU/11   507:36  0.00%   353% [system]
  484 pgsql     85    0    77M 4572K select/7   2:45  0.00%  0.00% postgres
 6099 gson      85    0    95M 3020K select/6   0:58  0.00%  0.00% sshd
Pressing the "t" key shows that the kernel threads eating CPU are
the pgdaemon and xcall threads:
load averages: 0.00, 0.00, 0.76; up 1+18:49:12
217 threads: 49 idle, 1 runnable, 146 sleeping, 8 stopped, 1 zombie, 12 on CPU
CPU states: 0.0% user, 0.0% nice, 35.8% system, 0.0% interrupt, 64.1% idle
Memory: 20G Act, 10G Inact, 348K Wired, 33M Exec, 4875M File, 62M Free
Swap:
  PID   LID USERNAME PRI STATE      TIME   WCPU    CPU NAME     COMMAND
    0     7 root     127 xcall/0   43:21 61.96% 61.96% xcall/0  [system]
    0    22 root     127 xcall/1   42:08 47.22% 47.22% xcall/1  [system]
    0    28 root     127 xcall/2   39:35 42.97% 42.97% xcall/2  [system]
    0    34 root     127 RUN/3     34:54 31.59% 31.59% xcall/3  [system]
    0    52 root     127 xcall/6   29:36 30.96% 30.96% xcall/6  [system]
    0    58 root     127 xcall/7   28:53 29.88% 29.88% xcall/7  [system]
    0    70 root     127 xcall/9   26:41 29.69% 29.69% xcall/9  [system]
    0    64 root     127 xcall/8   26:46 29.49% 29.49% xcall/8  [system]
    0   156 root     126 xclocv/1  92:15 29.44% 29.44% pgdaemon [system]
    0    82 root     127 xcall/11  24:05 28.47% 28.47% xcall/11 [system]
    0    46 root     127 xcall/5   31:20 28.12% 28.12% xcall/5  [system]
    0    40 root     127 xcall/4   30:48 25.29% 25.29% xcall/4  [system]
    0    76 root     127 xcall/10  24:03 25.05% 25.05% xcall/10 [system]
    0   157 root     124 syncer/4  22:45  0.00%  0.00% ioflush  [system]
    0   158 root     125 aiodon/9   5:12  0.00%  0.00% aiodoned [system]
    0    84 root      96 ipmicm/1   5:04  0.00%  0.00% ipmi     [system]
  484     1 pgsql     85 select/2   2:45  0.00%  0.00% -        postgres
    0     9 root     125 vdrain/1   1:17  0.00%  0.00% vdrain   [system]
    0   159 root     123 physio/0   1:12  0.00%  0.00% physiod  [system]
Output from "vmstat 1":
 procs    memory          page               disks     faults          cpu
 r b      avm    fre    flt re pi po  fr  sr  l0 s0  in    sy     cs  us sy id
 1 8 21024468  74920  15313  1  0  0 191 532  79 44 170 11879  38629   3  3 93
 0 8 21024468  74920      1  0  0  0   0   0   0  0   8   121 960529   0 36 64
 0 8 21024468  74668    613  0  0  0   0   0   0  3  27   316 951463   0 37 63
 0 8 21024468  74672      0  0  0  0   0   0   0  0   3    25 958574   0 37 63
 0 8 21024468  74672      0  0  0  0   0   0   0  0   2    28 962733   0 35 65
 0 8 21024468  74940      0  0  0  0   0   0   0  0   2    25 957158   0 36 64
 0 8 21024468  74940      0  0  0  0   0   0   0  0   4   106 953688   0 37 63
I will try to avoid rebooting for 24 hours in case someone wants me to
run other diagnostics.
>How-To-Repeat:
Don't know; this has only happened once so far. I had been using dtrace,
so maybe that's what triggered it. Or not.
>Fix:
>Audit-Trail:
From: coypu@sdf.org
To: gnats-bugs@NetBSD.org
Cc:
Subject: Re: kern/53591: [system] process uses >400% CPU on idle machine
Date: Tue, 11 Sep 2018 08:53:32 +0000
Could this be related?
(I don't know how to check)
https://v4.freshbsd.org/commit/netbsd/src/pKELGaMpgIjnUoLA
From: Andreas Gustafsson <gson@gson.org>
To: coypu@sdf.org
Cc: gnats-bugs@NetBSD.org
Subject: Re: kern/53591: [system] process uses >400% CPU on idle machine
Date: Tue, 11 Sep 2018 12:13:21 +0300
coypu@sdf.org wrote:
> Could this be related?
> (I don't know how to check)
>
> https://v4.freshbsd.org/commit/netbsd/src/pKELGaMpgIjnUoLA
I don't think so. I ran a kernel profile using dtrace:
dtrace -x stackframes=100 -n 'profile-99 /arg0/ { @[stack()] = count(); }'
and pserialize did not show up at all. Most of the time was spent in
the idle loop and xc_thread, and then there was this stack trace
which looks like it could be the source of the xcalls:
netbsd`xc_wait+0x3a
netbsd`pool_cache_invalidate+0xd7
netbsd`pool_reclaim+0x58
netbsd`pool_drain+0x60
netbsd`uvm_pageout+0x4bf
netbsd`lwp_trampoline+0x17
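If someone wants to confirm where the cross-calls are actually being
posted from, a counting probe along these lines might help (an untested
sketch; it assumes this kernel's fbt provider exposes the xc_broadcast
and xc_unicast entry points):

dtrace -n '
    /* count the kernel code paths that post cross-calls, by caller stack */
    fbt::xc_broadcast:entry,
    fbt::xc_unicast:entry
    {
        @[probefunc, stack()] = count();
    }'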
--
Andreas Gustafsson, gson@gson.org
From: Lars Reichardt <lars@paradoxon.info>
To: gnats-bugs@NetBSD.org, kern-bug-people@netbsd.org,
gnats-admin@netbsd.org, netbsd-bugs@netbsd.org
Cc:
Subject: Re: kern/53591: [system] process uses >400% CPU on idle machine
Date: Tue, 11 Sep 2018 12:09:35 +0200
On 9/11/18 10:50 AM, Andreas Gustafsson wrote:
> My 12-core HP DL360 G7 system running NetBSD/amd64 8.0 has now somehow
> gotten itself into a state where the [system] process is using >400%
> CPU even though the system is idle. "top" shows:
> [...]
> I will try to avoid rebooting for 24 hours in case someone wants me to
> run other diagnostics.
How much memory does the machine have? Maybe some pools (with
larger-than-PAGE_SIZE allocators) have eaten all the kmem_va space.
What does vmstat -mv show?
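If pool pages are being eaten, watching which pools keep growing should
point at the culprit; a rough sketch (it assumes the fbt provider
exposes pool_grow on this kernel, and that struct pool keeps the pool
name in its pr_wchan field):

dtrace -n '
    /* tally pool page allocations by pool name, print every 10 seconds */
    fbt::pool_grow:entry
    {
        @[stringof(((struct pool *)arg0)->pr_wchan)] = count();
    }
    tick-10s
    {
        printa(@);
        trunc(@);
    }'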
From: Andreas Gustafsson <gson@gson.org>
To: Lars Reichardt <lars@paradoxon.info>
Cc: gnats-bugs@NetBSD.org
Subject: Re: kern/53591: [system] process uses >400% CPU on idle machine
Date: Tue, 11 Sep 2018 13:16:57 +0300
Lars Reichardt wrote:
> How much memory does the machine have? Maybe some pools (with
> larger-than-PAGE_SIZE allocators) have eaten all the kmem_va space.
48 GB.
> What does vmstat -mv show?
Memory resource pool statistics
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
ah_tdb_crypto 192 0 0 0 0 0 0 0 0 inf 0
aio_jobs_pool 128 0 0 0 0 0 0 0 0 inf 0
aio_lio_pool 40 0 0 0 0 0 0 0 0 inf 0
amappl 80 7319150 0 7317532 270 195 75 125 0 inf 0
anonpl 32 214181082 0 214144083 17460 15082 2378 5338 0 inf 0
biopl 288 731304 0 731304 5856 5855 1 1569 0 inf 1
bnxpkts 40 84 0 0 1 0 1 1 0 inf 0
brtpl 56 0 0 0 0 0 0 0 0 inf 0
buf16k 16384 5635 0 5402 771 695 76 316 1 1 0
buf1k 1024 4 0 4 2 1 1 2 1 1 1
buf2k 2048 41 0 41 17 16 1 14 1 1 1
buf32k 32768 289470 0 273888 88709 80757 7952 32673 1 1 0
buf4k 4096 2740424 0 2630918 2740425 2630918 109507 904489 1 1 1
buf512b 512 0 0 0 1 0 1 1 1 1 1
buf64k 65536 0 0 0 1 0 1 1 1 1 1
buf8k 8192 11392 0 10999 609 538 71 322 1 1 0
bufpl 288 1557354 0 1431640 99714 83338 16376 69416 0 inf 0
ccdbuf 320 0 0 0 0 0 0 0 0 inf 0
cd9660nopl 208 0 0 0 0 0 0 0 0 inf 0
cryptdesc 128 0 0 0 0 0 0 0 0 inf 0
cryptkop 384 0 0 0 0 0 0 0 0 inf 0
cryptop 320 0 0 0 0 0 0 0 0 inf 0
csepl 208 0 0 0 17 0 17 17 17 inf 17
cwdi 64 484503 0 484455 18 15 3 7 0 inf 0
cyclic_id_cache 64 8 0 8 8 8 0 1 0 inf 0
dbregs 128 0 0 0 0 0 0 0 0 inf 0
dirhepl 40 0 0 0 0 0 0 0 0 260 0
dirhpl 296 0 0 0 0 0 0 0 0 inf 0
dtrace_state_ca 768 27 0 27 14 14 0 4 0 inf 0
ehcixfer 400 2 0 1 1 0 1 1 0 inf 0
esp_tdb_crypto 128 0 0 0 0 0 0 0 0 inf 0
execargs 262144 3587867 0 3587867 323 321 2 8 0 16 2
ext2fsinopl 256 0 0 0 0 0 0 0 0 inf 0
extent 40 16 0 13 1 0 1 1 0 inf 0
fcrpl 168 0 0 0 3 0 3 3 3 inf 3
fdfile 64 1624770 0 1623303 108 78 30 60 0 inf 0
ffsdino1 128 5434084 0 4474127 107678 76711 30967 60700 0 inf 0
ffsdino2 256 5276044 0 3965298 134881 21574 113307 134881 0 inf 0
ffsino 256 5873299 0 3602596 158933 1711 157222 157260 0 inf 0
file 128 3647985 0 3647721 111 98 13 26 0 inf 0
filedesc 832 486678 0 486630 747 728 19 107 0 inf 0
icmp 24 30 0 30 2 2 0 1 0 inf 0
icmp6 24 7 0 7 4 4 0 1 0 inf 0
igmppl 32 0 0 0 0 0 0 0 0 inf 0
in6pcbpl 272 166 0 157 1 0 1 1 0 inf 0
inmltpl 48 7 0 4 1 0 1 1 0 inf 0
inpcbpl 232 472 0 458 4 3 1 2 0 inf 0
ipcomp_tdb_cryp 64 0 0 0 0 0 0 0 0 inf 0
ipfrenpl 64 6 0 6 1 1 0 1 0 inf 0
kcpuset 64 936708 0 936554 27 21 6 11 0 inf 0
kcredpl 192 363699 0 363502 14 1 13 14 0 inf 0
kmem-1024 1024 4468641 1 4468172 13475 13292 183 1406 0 inf 0
kmem-112 112 1405181 0 1404810 62 46 16 25 0 inf 0
kmem-128 128 3098635 0 3038394 2111 216 1895 1936 0 inf 0
kmem-16 16 3938994 0 3910726 128 16 112 116 0 inf 0
kmem-160 160 2054488 0 1521791 21542 223 21319 21329 0 inf 0
kmem-192 192 710307 0 707999 145 30 115 124 0 inf 0
kmem-2048 2048 3520023 1 2987733 278664 12322 266342 267143 0 inf 0
kmem-224 224 283498 0 281379 172 51 121 132 0 inf 0
kmem-24 24 4467393 0 4412427 326 1 325 326 0 inf 0
kmem-256 256 1471650 0 1471328 371 342 29 72 0 inf 0
kmem-32 32 2774458 0 2160461 4867 69 4798 4802 0 inf 0
kmem-320 320 878079 0 877480 162 109 53 72 0 inf 0
kmem-384 384 1396633 0 1387267 1021 78 943 961 0 inf 0
kmem-40 40 1564809 0 1556717 274 193 81 83 0 inf 0
kmem-4096 4096 145428 1 84679 62280 1530 60750 60757 0 inf 1
kmem-448 448 1930179 0 1930118 252 242 10 41 0 inf 0
kmem-48 48 1567116 0 1566121 17 3 14 15 0 inf 0
kmem-512 512 860435 0 860296 295 275 20 53 0 inf 0
kmem-56 56 3069616 0 3068756 25 1 24 24 0 inf 0
kmem-64 64 4161045 0 4154041 165 24 141 162 0 inf 0
kmem-768 768 2330036 0 2329707 1672 1595 77 894 0 inf 0
kmem-8 8 7846087 0 7726025 246 10 236 238 0 inf 0
kmem-80 80 2234884 0 2233730 50 20 30 35 0 inf 0
kmem-96 96 970741 0 880322 2162 5 2157 2161 0 inf 0
ksiginfo 72 157544 0 157539 15 12 3 5 0 inf 0
ktrace 120 0 0 0 0 0 0 0 0 inf 0
kva-12288 12288 2631 0 14 125 0 125 125 0 inf 0
kva-16384 16384 230 0 54 14 3 11 11 0 inf 0
kva-20480 20480 24 0 13 2 1 1 2 0 inf 0
kva-24576 24576 14 0 2 2 0 2 2 0 inf 0
kva-28672 28672 3 0 0 1 0 1 1 0 inf 0
kva-32768 32768 967 0 963 10 9 1 5 0 inf 0
kva-36864 36864 2 0 0 1 0 1 1 0 inf 0
kva-4096 4096 0 0 0 0 0 0 0 0 inf 0
kva-40960 40960 0 0 0 0 0 0 0 0 inf 0
kva-49152 49152 0 0 0 0 0 0 0 0 inf 0
kva-65536 65536 46 0 43 10 9 1 8 0 inf 0
kva-8192 8192 60 0 35 3 2 1 2 0 inf 0
l2cap_pdu 48 0 0 0 0 0 0 0 0 inf 0
l2cap_req 120 0 0 0 0 0 0 0 0 inf 0
lfsdinopl 256 0 0 0 0 0 0 0 0 inf 0
lfsinoextpl 192 0 0 0 0 0 0 0 0 inf 0
lfsinopl 224 0 0 0 0 0 0 0 0 inf 0
lfslbnpool 24 0 0 0 0 0 0 0 0 inf 0
llentrypl 272 61 0 59 1 0 1 1 0 inf 0
lockf 112 90983 0 90975 10 9 1 3 0 inf 0
lwppl 1056 397550 0 397336 1120 1043 77 180 0 inf 0
mbpl 512 1623617 0 1622592 234 76 158 220 2 inf 0
mclpl 2048 343519 0 342499 4815 4299 516 663 4 786269 4
mqmsgpl 1024 0 0 0 0 0 0 0 0 inf 0
msdosfhpl 48 0 0 0 0 0 0 0 0 inf 0
msdosnopl 208 0 0 0 0 0 0 0 0 inf 0
mutex 64 13353830 0 10549118 50575 2161 48414 48414 0 inf 0
ncache 192 10218095 0 7820062 120481 675 119806 119818 0 inf 0
nfsnodepl 280 0 0 0 0 0 0 0 0 inf 0
nfsreqcachepl 96 0 0 0 0 0 0 0 0 inf 0
nfsrvdescpl 248 0 0 0 0 0 0 0 0 inf 0
nfsvapl 176 0 0 0 0 0 0 0 0 inf 0
pcache 2688 96 0 4 92 0 92 92 0 inf 0
pcachecpu 64 1100 0 0 18 0 18 18 0 inf 0
pcglarge 1024 11893610 0 11893608 28975 28967 8 2948 0 inf 6
pcgnormal 256 50296857 0 50296842 55982 55978 4 20659 0 inf 3
pdict128 184 0 0 0 0 0 0 0 0 inf 0
pdict16 72 348 0 298 1 0 1 1 0 inf 0
pdict32 88 16 0 6 1 0 1 1 0 inf 0
pdppl 4096 466320 0 466272 4627 4579 48 412 0 inf 0
pewpl 24 0 0 0 1 0 1 1 1 1 1
phpool-0 56 3606920 1 2863817 22574 8145 14429 21390 0 inf 0
phpool-1024 176 0 0 0 0 0 0 0 0 inf 0
phpool-128 64 5183 0 266 79 0 79 79 0 inf 0
phpool-2048 304 0 0 0 0 0 0 0 0 inf 0
phpool-256 80 454 0 17 9 0 9 9 0 inf 0
phpool-4096 560 0 0 0 0 0 0 0 0 inf 0
phpool-512 112 246 0 10 7 0 7 7 0 inf 0
phpool-64 56 5053 0 4693 11 2 9 9 0 inf 0
piperd 320 115313 0 115283 125 120 5 32 0 inf 0
pipewr 320 127117 0 127087 167 163 4 31 0 inf 0
plimitpl 232 181138 0 181105 65 62 3 17 0 inf 0
pmappl 408 468146 0 468098 140 128 12 42 0 inf 0
pnbufpl 1024 9627707 0 9627702 2387 2383 4 64 0 inf 2
procpl 720 392585 0 392537 583 570 13 78 0 inf 0
proparay 48 186 0 0 3 0 3 3 0 inf 0
propdata 40 0 0 0 0 0 0 0 0 inf 0
propdict 48 671 0 154 7 0 7 7 0 inf 0
propnmbr 56 46 0 0 1 0 1 1 0 inf 0
propstng 40 1243 0 304 10 0 10 10 0 inf 0
pstatspl 448 393014 0 392966 255 244 11 44 0 inf 0
ptimerpl 264 313 0 300 1 0 1 1 0 inf 0
ptimerspl 304 313 0 300 6 4 2 2 0 inf 0
puffpnpl 240 0 0 0 0 0 0 0 0 inf 0
puffprkl 112 0 0 0 0 0 0 0 0 inf 0
puffvapl 176 0 0 0 0 0 0 0 0 inf 0
pvpl 40 256408263 0 256383829 6214 5588 626 1471 0 inf 0
ractx 32 1329893 0 610844 9656 3949 5707 5740 0 inf 0
rfcomm_credit 24 0 0 0 0 0 0 0 0 inf 0
rndctx 16 83842 0 83842 39 39 0 1 0 inf 0
rndsample 536 16754 0 16708 30 22 8 20 0 586 0
rndtemp 512 83791 0 83791 52 52 0 2 0 inf 0
rtentpl 320 42 0 10 3 0 3 3 0 inf 0
rttmrpl 64 0 0 0 0 0 0 0 0 inf 0
rwlock 64 5 0 0 1 0 1 1 0 inf 0
sackholepl 32 331 0 331 37 37 0 1 0 inf 0
scxspl 256 11951915 0 11951915 598 597 1 17 1 inf 1
sigacts 3088 483796 0 483748 4657 4609 48 422 0 inf 0
smbfsnopl 176 0 0 0 0 0 0 0 0 inf 0
smbrqpl 288 0 0 0 0 0 0 0 0 inf 0
smbt2pl 224 0 0 0 0 0 0 0 0 inf 0
socket 592 732 0 607 47 23 24 35 0 inf 0
swp vnd 296 0 0 0 0 0 0 0 0 inf 0
swp vnx 32 0 0 0 0 0 0 0 0 inf 0
synpl 312 62 0 62 10 10 0 1 0 inf 0
taskq_cache 352 1 0 0 1 0 1 1 0 inf 0
taskq_ent_cache 72 4 0 0 1 0 1 1 0 inf 0
tcpcbpl 832 350 0 332 13 8 5 6 0 inf 0
tcpipqepl 80 870 0 870 10 10 0 1 0 inf 0
tmpfs_dirent 48 3459274 0 2849244 7991 728 7263 7264 0 inf 0
tmpfs_node 216 3421346 0 2824086 36405 3222 33183 33189 0 inf 0
tstilepl 96 394520 0 394306 27 18 9 13 0 inf 0
uaoeltpl 96 0 0 0 0 0 0 0 0 inf 0
uarea 16384 395479 0 395265 4562 4348 214 538 0 inf 0
ufsdir 264 319217 0 319217 507 506 1 10 0 inf 1
ufsdq 80 0 0 0 0 0 0 0 0 inf 0
uhcixfer 400 5 0 3 1 0 1 1 0 inf 0
uhcixfer 400 3 0 2 1 0 1 1 0 inf 0
uhcixfer 400 3 0 2 1 0 1 1 0 inf 0
uhcixfer 400 3 0 2 1 0 1 1 0 inf 0
uhcixfer 400 3 0 2 1 0 1 1 0 inf 0
vcachepl 336 5088145 0 2817408 210363 689 209674 209683 0 inf 0
vmembt 56 433942 0 163789 3762 0 3762 3762 0 inf 0
vmmpepl 144 15712111 0 15704209 1301 887 414 601 0 inf 0
vmsppl 368 489079 0 489031 148 138 10 40 0 inf 0
wapbldealloc 32 0 0 0 0 0 0 0 0 inf 0
wapblentrypl 40 0 0 0 0 0 0 0 0 inf 0
Totals 715547626 4698125482432720330845021242701
In use 5043006K, total allocated 5460760K; utilization 92.3%
--
Andreas Gustafsson, gson@gson.org