wiki:TipAndDoc/storage/LVM

Version 10 (modified by mitty, 10 years ago)


lvchange/vgchange -ay

  • root@Knoppix:~# vgdisplay -v
        Finding all volume groups
        Finding volume group "vgnfs"
      --- Volume group ---
      VG Name               vgnfs
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               1.94 GB
      PE Size               32.00 MB
      Total PE              62
      Alloc PE / Size       48 / 1.50 GB
      Free  PE / Size       14 / 448.00 MB
      VG UUID               I6vVoh-6gCJ-9uvA-v2MV-Fyva-7J8v-Cvftfi
    
      --- Logical volume ---
      LV Name                /dev/vgnfs/drbd
      VG Name                vgnfs
      LV UUID                dNxdNj-hZCZ-GMrC-woMk-0hA2-f3oR-sBsrHI
      LV Write Access        read/write
      LV Status              NOT available
      LV Size                1.50 GB
      Current LE             48
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
    
      --- Physical volumes ---
      PV Name               /dev/md0
      PV UUID               Z2JXRP-fa5g-SYS5-xzMs-Lq8C-1Jbh-QPKihr
      PV Status             allocatable
      Total PE / Free PE    62 / 14
    
    
    • LV is "NOT available"
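    Per the section title, activating the VG (or the single LV directly) should bring it online; a minimal sketch using the names from the listing above:

    ```shell
    vgchange -ay vgnfs             # activate all LVs in the VG
    # or, for just the one LV:
    lvchange -ay /dev/vgnfs/drbd
    ```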
  • root@Knoppix:~# vgdisplay -v
        Finding all volume groups
        Finding volume group "vgnfs"
      --- Volume group ---
      VG Name               vgnfs
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               1.94 GB
      PE Size               32.00 MB
      Total PE              62
      Alloc PE / Size       48 / 1.50 GB
      Free  PE / Size       14 / 448.00 MB
      VG UUID               I6vVoh-6gCJ-9uvA-v2MV-Fyva-7J8v-Cvftfi
    
      --- Logical volume ---
      LV Name                /dev/vgnfs/drbd
      VG Name                vgnfs
      LV UUID                dNxdNj-hZCZ-GMrC-woMk-0hA2-f3oR-sBsrHI
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                1.50 GB
      Current LE             48
      Segments               1
      Allocation             inherit
      Read ahead sectors     0
      Block device           254:0
    
      --- Physical volumes ---
      PV Name               /dev/md0
      PV UUID               Z2JXRP-fa5g-SYS5-xzMs-Lq8C-1Jbh-QPKihr
      PV Status             allocatable
      Total PE / Free PE    62 / 14
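
    The LV is now "available" and has a block device (254:0). Taking it back offline would be the reverse (a sketch with the same names):

    ```shell
    lvchange -an /dev/vgnfs/drbd   # deactivate the single LV
    vgchange -an vgnfs             # or deactivate every LV in the VG
    ```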
    
    

Can't deactivate

  • At shutdown: Shutting down LVM Volume Groups Can't deactivate volume group "vgroot" with 1 open logical volume(s) failed!
  • Debian 6.0 with 2.6.39-bpo.2-amd64
  • lvm2: 2.02.66-5
  • mitty@shizuku-debian:~$ ls /etc/rc6.d/ -l
    lrwxrwxrwx 1 root root  21 Oct 22 16:11 K01mpt-statusd -> ../init.d/mpt-statusd
    lrwxrwxrwx 1 root root  17 Oct 22 16:07 K01urandom -> ../init.d/urandom
    lrwxrwxrwx 1 root root  18 Oct 22 16:11 K02sendsigs -> ../init.d/sendsigs
    lrwxrwxrwx 1 root root  17 Oct 22 16:11 K03rsyslog -> ../init.d/rsyslog
    lrwxrwxrwx 1 root root  20 Oct 22 16:11 K04hwclock.sh -> ../init.d/hwclock.sh
    lrwxrwxrwx 1 root root  22 Oct 22 16:11 K04umountnfs.sh -> ../init.d/umountnfs.sh
    lrwxrwxrwx 1 root root  20 Oct 22 16:11 K05networking -> ../init.d/networking
    lrwxrwxrwx 1 root root  18 Oct 22 16:11 K06ifupdown -> ../init.d/ifupdown
    lrwxrwxrwx 1 root root  18 Oct 22 16:11 K07umountfs -> ../init.d/umountfs
    lrwxrwxrwx 1 root root  14 Oct 22 16:11 K08lvm2 -> ../init.d/lvm2
    lrwxrwxrwx 1 root root  20 Oct 22 16:11 K09umountroot -> ../init.d/umountroot
    lrwxrwxrwx 1 root root  16 Oct 22 16:11 K10reboot -> ../init.d/reboot
    -rw-r--r-- 1 root root 351 Jan  1  2011 README
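
    The K-numbers above explain the failure: K08lvm2 runs before K09umountroot, so the root filesystem on vgroot is still mounted (its LV is open) when the lvm2 script tries to deactivate the VG. Which LVs are open can be checked with the standard tools (a sketch; run as root):

    ```shell
    lvs -o vg_name,lv_name,lv_attr    # 6th attr character 'o' = open
    dmsetup info -c -o name,open      # open reference count per dm device
    ```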
    

LVM RAID

  • https://wiki.gentoo.org/wiki/LVM#Different_storage_allocation_methods

    It is not possible to stripe an existing volume, nor reshape the stripes across more/less physical volumes, nor to convert to a different RAID level/linear volume. A stripe set can be mirrored. It is possible to extend a stripe set across additional physical volumes, but they must be added in multiples of the original stripe set (which will effectively linearly append a new stripe set).

    • The lack of reshape and similar operations for RAID 1/4/5/6 seems like quite a severe limitation. MD-RAID supports it, so LVM may gain support eventually.
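
    The extension constraint in the quote can be illustrated; assuming a hypothetical VG `vg` and example devices, a 2-way stripe must grow by PVs in multiples of two:

    ```shell
    # create a 2-way striped LV (all device names are examples)
    lvcreate -i 2 -I 64 -L 1G -n striped vg /dev/sdb1 /dev/sdc1
    # extending requires PVs in multiples of the stripe count (here: 2);
    # the new extents form a second stripe set appended linearly
    vgextend vg /dev/sdd1 /dev/sde1
    lvextend -L +1G vg/striped /dev/sdd1 /dev/sde1
    ```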

LVM with MD-RAID

  • "A fully manual BeyondRAID work-alike built with mdadm and LVM" - 守破離

    Roughly speaking: create several RAID1 or RAID5 arrays spanning the HDDs, then aggregate those arrays into one large storage pool. The nice part is that it survives losing one HDD, and in some situations replacing just a single HDD increases the pool's capacity. Replacing at least two HDDs with larger ones is guaranteed to increase capacity.
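
    A minimal sketch of that layout, assuming four example disks paired into two RAID1 arrays that LVM then concatenates (all names hypothetical):

    ```shell
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    pvcreate /dev/md0 /dev/md1
    vgcreate vgdata /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n data vgdata
    # replacing both disks of one RAID1 pair with larger ones, then
    # growing it with mdadm --grow and pvresize, enlarges the pool
    ```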

Thin provisioning

  • (CentOS) LVM thinpool snapshots broken in 6.5?

    For the people who run into this as well: This is apparently a feature and not a bug. Thin provisioning snapshots are no longer automatically activated and a "skip activation" flag is set during creation by default. One has to add the "-K" option to "lvchange -ay <snapshot-volume>" to have lvchange ignore this flag and activate the volume for real. "-k" can be used on lvcreate to not add this flag to the volume. See man lvchange/lvcreate for more details. /etc/lvm/lvm.conf also contains a "auto_set_activation_skip" option now that controls this.

    Apparently this was changed in 6.5 but the changes were not mentioned in the release notes.

    • Thin snapshots are now created inactive by default; activating one requires passing the '-K' option to lvchange
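
    A sketch of the behavior described in the quote (VG/LV names hypothetical):

    ```shell
    lvcreate -s --name snap vgdata/thinvol       # snapshot gets the skip-activation flag
    lvchange -ay vgdata/snap                     # flag honored: stays inactive
    lvchange -ay -K vgdata/snap                  # -K ignores the flag and activates
    lvcreate -kn -s --name snap2 vgdata/thinvol  # -kn: create without the skip flag
    ```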
