kernel: BUG: soft lockup _raw_spin_unlock_irqrestore
The system hung and then rebooted, and the soft lockup message kept appearing until the reboot. Since kdump was not enabled beforehand, no vmcore is available. Kernel: 3.10.0-327.el7.x86_64.



If anyone has seen a similar problem before, do you know what the cause was? Thanks.
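To get a vmcore on the next occurrence, kdump can be enabled ahead of time. A minimal sketch for RHEL/CentOS 7 (package, service, and file names below are the distro defaults; adjust the crashkernel size to your memory layout):

```shell
# Install the kdump tooling (provides the kdump systemd service)
yum install -y kexec-tools

# A crash kernel memory reservation must be present on the kernel command line
grep -o 'crashkernel=[^ ]*' /proc/cmdline || echo "no crashkernel= reservation set"
# If missing: add e.g. crashkernel=auto to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the config and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg && reboot

# Enable and start the service; vmcores land under /var/crash by default
systemctl enable --now kdump
systemctl is-active kdump
```

Note that a soft lockup by itself only logs a warning; to force a crash dump when one occurs, `kernel.softlockup_panic=1` can additionally be set via sysctl.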



Nov 14 06:25:07 localhost kernel: BUG: soft lockup - CPU#3 stuck for 37s! [xfsaild/dm-0:487]
Nov 14 06:25:07 localhost kernel: Modules linked in: fuse btrfs zlib_deflate raid6_pq xor vfat msdos fat ext4 mbcache jbd2 binfmt_misc ip6t_rpfilter ip6t_REJECT ipt_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw iptable_filter vmw_vsock_vmci_transport vsock coretemp crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ppdev vmw_balloon pcspkr sg parport_pc parport shpchp i2c_piix4 vmw_vmci ip_tables xfs libcrc32c sr_mod cdrom ata_generic pata_acpi sd_mod crc_t10dif crct10dif_generic serio_raw crct10dif_pclmul
Nov 14 06:25:07 localhost kernel: crct10dif_common vmwgfx crc32c_intel drm_kms_helper ttm drm ata_piix vmxnet3 libata i2c_core vmw_pvscsi floppy dm_mirror dm_region_hash dm_log dm_mod
Nov 14 06:25:07 localhost kernel: CPU: 3 PID: 487 Comm: xfsaild/dm-0 Tainted: G L ------------ 3.10.0-327.el7.x86_64 #1
Nov 14 06:25:07 localhost kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/30/2014
Nov 14 06:25:07 localhost kernel: task: ffff880fe4ac9700 ti: ffff880fe33f4000 task.ti: ffff880fe33f4000
Nov 14 06:25:07 localhost kernel: RIP: 0010:[<ffffffff8163ca4b>] [<ffffffff8163ca4b>] _raw_spin_unlock_irqrestore+0x1b/0x40
Nov 14 06:25:07 localhost kernel: RSP: 0018:ffff880fe33f7b68 EFLAGS: 00000282
Nov 14 06:25:07 localhost kernel: RAX: 0000000000000000 RBX: ffff880fe33f7b30 RCX: 0000000000000200
Nov 14 06:25:07 localhost kernel: RDX: ffffc90006060000 RSI: 0000000000000282 RDI: 0000000000000282
Nov 14 06:25:07 localhost kernel: RBP: ffff880fe33f7b70 R08: 0000000000000000 R09: ffff8805e761ec00
Nov 14 06:25:07 localhost kernel: R10: ffff880fe471a000 R11: ffff880fe88db800 R12: ffff880b28fc9f00
Nov 14 06:25:07 localhost kernel: R13: 0000000000000020 R14: ffffffff8141e59f R15: ffff880fe33f7ae0
Nov 14 06:25:07 localhost kernel: FS: 0000000000000000(0000) GS:ffff88103fcc0000(0000) knlGS:0000000000000000
Nov 14 06:25:07 localhost kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 14 06:25:07 localhost kernel: CR2: 00007ff539c0c810 CR3: 0000000fe7528000 CR4: 00000000001407e0
Nov 14 06:25:07 localhost kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 14 06:25:07 localhost kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Nov 14 06:25:07 localhost kernel: Stack:
Nov 14 06:25:07 localhost kernel: 0000000000000000 ffff880fe33f7be0 ffffffffa00554f7 ffff8805e761ec00
Nov 14 06:25:07 localhost kernel: ffff880fe471a000 ffffffff8141e0c0 0000000000000002 ffff880fe88dc754
Nov 14 06:25:07 localhost kernel: ffff880fe31e6a00 0000000000000282 ffff880b28fc9f80 0000000000000000
Nov 14 06:25:07 localhost kernel: Call Trace:
Nov 14 06:25:07 localhost kernel: [<ffffffffa00554f7>] pvscsi_queue+0x3b7/0x5c0 [vmw_pvscsi]
Nov 14 06:25:07 localhost kernel: [<ffffffff8141e0c0>] ? scsi_kmap_atomic_sg+0x190/0x190
Nov 14 06:25:07 localhost kernel: [<ffffffff81417b1a>] scsi_dispatch_cmd+0xaa/0x230
Nov 14 06:25:07 localhost kernel: [<ffffffff81420aa1>] scsi_request_fn+0x501/0x770
Nov 14 06:25:07 localhost kernel: [<ffffffff812c73e3>] __blk_run_queue+0x33/0x40
Nov 14 06:25:07 localhost kernel: [<ffffffff812c749a>] queue_unplugged+0x2a/0xa0
Nov 14 06:25:07 localhost kernel: [<ffffffff812cbcc5>] blk_flush_plug_list+0x185/0x230
Nov 14 06:25:07 localhost kernel: [<ffffffff812cc124>] blk_finish_plug+0x14/0x40
Nov 14 06:25:07 localhost kernel: [<ffffffffa0222a79>] __xfs_buf_delwri_submit+0x1e9/0x250 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffffa022367f>] ? xfs_buf_delwri_submit_nowait+0x2f/0x50 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffffa024e470>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffffa022367f>] xfs_buf_delwri_submit_nowait+0x2f/0x50 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffffa024e6b0>] xfsaild+0x240/0x5e0 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffffa024e470>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
Nov 14 06:25:07 localhost kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Nov 14 06:25:07 localhost kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Nov 14 06:25:07 localhost kernel: [<ffffffff81645858>] ret_from_fork+0x58/0x90
Nov 14 06:25:07 localhost kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Nov 14 06:25:07 localhost kernel: Code: 08 e8 aa 72 a4 ff 5d c3 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 53 48 89 f3 0f 1f 44 00 00 66 83 07 02 48 89 df 57 9d <0f> 1f 44 00 00 5b 5d c3 0f 1f 44 00 00 8b 37 f0 66 83 07 02 f6









linux-kernel






edited 2 days ago by sourcejedi
asked 2 days ago by zinnia
1 Answer






          The soft lockup seems to be related to disk I/O request processing. On hardware systems, I would check the SMART data and any other available disk health information to exclude the possibility of a hardware problem.



          However, this seems to be a VMware virtual machine, so the first thing to check would be the statistics of the virtualization host: is the host or its storage getting overloaded by all the VMs on it? That could cause long delays in responses to I/O requests. If such a delay lasts over 30 seconds, you'd start getting these soft lockup notifications, even though the root cause might be that there is not enough CPU capacity or storage I/O bandwidth to satisfy the requirements of all the VMs on the host.
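These checks can be done from the guest first; a hedged sketch (assumes the `sysstat` and `smartmontools` packages are installed, and `/dev/sda` is an example device name):

```shell
# Watch per-device I/O latency from inside the VM:
# a persistently high "await" column points at a slow storage backend.
iostat -x 5 3

# SCSI command timeout for the device, in seconds
cat /sys/block/sda/device/timeout

# Disk health via SMART -- only meaningful on physical hardware;
# virtual disks presented by vmw_pvscsi usually expose no SMART data.
smartctl -a /dev/sda
```

On the VMware side, the equivalent check is the host's disk latency statistics (e.g. the DAVG/KAVG columns in esxtop's disk views) for the datastore backing this VM.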






answered 2 days ago by telcoM