web-dev-qa-db-ja.com

LVM resize procedure

Starting from here, I'd like to know how, if possible, to resize the /usr, /var, and /home partitions without losing data, and, since this is a remote server, how to do it in normal mode (no recovery mode). I've looked at other posts ( How to resize a RAID1 array with mdadm?, Linux: create a software RAID 1 from a partition that contains data ) and at the documentation, but I'm still not sure about the process. Thanks.

rdw@u18702824:~$ df -h
  Filesystem             Size  Used Avail Use% Mounted on
  udev                   7.8G  4.0K  7.8G   1% /dev
  tmpfs                  1.6G  1.4M  1.6G   1% /run
  /dev/md1               4.0G  3.4G  549M  87% /
  none                   4.0K     0  4.0K   0% /sys/fs/cgroup
  none                   5.0M     0  5.0M   0% /run/lock
  none                   7.8G  8.0K  7.8G   1% /run/shm
  none                   100M   28K  100M   1% /run/user
  /dev/mapper/vg00-usr   4.8G  4.8G     0 100% /usr
  /dev/mapper/vg00-var   4.8G  2.5G  2.1G  55% /var
  /dev/mapper/vg00-home  4.8G  2.9G  1.8G  62% /home

======================================================

rdw@u18702824:~$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Mon Feb  6 14:19:22 2017
     Raid Level : raid1
     Array Size : 4194240 (4.00 GiB 4.29 GB)
  Used Dev Size : 4194240 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Feb 23 12:10:34 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3562dace:6f38a4cf:1f51fb89:78ee93fe
         Events : 0.72

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

======================================================

rdw@u18702824:~$ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Mon Feb  6 14:19:23 2017
     Raid Level : raid1
     Array Size : 1458846016 (1391.26 GiB 1493.86 GB)
  Used Dev Size : 1458846016 (1391.26 GiB 1493.86 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Thu Feb 23 12:10:46 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 52d90469:78a9a458:1f51fb89:78ee93fe
         Events : 0.1464

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

======================================================

rdw@u18702824:~$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg00/usr
  LV Name                usr
  VG Name                vg00
  LV UUID                dwihnp-aXSl-rCly-MvlH-FoxI-hDrv-mDnVNJ
  LV Write Access        read/write
  LV Creation Host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                I5eIwR-dunS-3ua2-IrSw-3C30-cxOS-zLj3a4
  LV Write Access        read/write
  LV Creation Host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                4tYJyU-wlnF-qERG-95Wt-2rR4-Gyfs-NofCZd
  LV Write Access        read/write
  LV Creation Host, time ,
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

======================================================  

rdw@u18702824:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TiB
  PE Size               4.00 MiB
  Total PE              356163
  Alloc PE / Size       3840 / 15.00 GiB
  Free  PE / Size       352323 / 1.34 TiB
  VG UUID               av08Kn-EzMV-2mie-HE97-cHcr-oL1x-qmYMz6

======================================================

rdw@u18702824:~$ sudo lvscan
  ACTIVE            '/dev/vg00/usr' [5.00 GiB] inherit
  ACTIVE            '/dev/vg00/var' [5.00 GiB] inherit
  ACTIVE            '/dev/vg00/home' [5.00 GiB] inherit

rdw@u18702824:~$ sudo lvmdiskscan
  /dev/ram0      [      64.00 MiB]
  /dev/vg00/usr  [       5.00 GiB]
  /dev/ram1      [      64.00 MiB]
  /dev/md1       [       4.00 GiB]
  /dev/vg00/var  [       5.00 GiB]
  /dev/ram2      [      64.00 MiB]
  /dev/sda2      [       2.00 GiB]
  /dev/vg00/home [       5.00 GiB]
  /dev/ram3      [      64.00 MiB]
  /dev/md3       [       1.36 TiB] LVM physical volume
  /dev/ram4      [      64.00 MiB]
  /dev/ram5      [      64.00 MiB]
  /dev/ram6      [      64.00 MiB]
  /dev/ram7      [      64.00 MiB]
  /dev/ram8      [      64.00 MiB]
  /dev/ram9      [      64.00 MiB]
  /dev/ram10     [      64.00 MiB]
  /dev/ram11     [      64.00 MiB]
  /dev/ram12     [      64.00 MiB]
  /dev/ram13     [      64.00 MiB]
  /dev/ram14     [      64.00 MiB]
  /dev/ram15     [      64.00 MiB]
  /dev/sdb2      [       2.00 GiB]
  3 disks
  19 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

======================================================

rdw@u18702824:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[1] sda1[0]
      4194240 blocks [2/2] [UU]

md3 : active raid1 sdb3[1] sda3[0]
      1458846016 blocks [2/2] [UU]

unused devices: <none>
Héctor

So your volume group has more than 1 TB of free space.

rdw@u18702824:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg00
  ...
  VG Size               1.36 TiB
  PE Size               4.00 MiB
  Total PE              356163
  Alloc PE / Size       3840 / 15.00 GiB
  Free  PE / Size       352323 / 1.34 TiB
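As a sanity check, the "Free PE / Size" figure can be recomputed by hand: free space is the number of free physical extents times the PE size. A minimal shell sketch of that arithmetic, using the numbers copied from the vgdisplay output above:

```shell
# Free space = free physical extents x PE size (figures from vgdisplay above)
pe_size_mib=4        # PE Size : 4.00 MiB
free_pe=352323       # Free PE
free_gib=$(( free_pe * pe_size_mib / 1024 ))
echo "${free_gib} GiB free"   # ~1376 GiB, i.e. about 1.34 TiB
```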

If you want to add another 10 GB to /usr, for a total of 15 GB, you would use commands like the following (assuming the filesystem is ext2, ext3, or ext4):

lvextend -L+10G /dev/vg00/usr
resize2fs /dev/vg00/usr
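A sketch of the fuller sequence, under the same ext2/3/4 assumption (sizes and LV names are examples, not a prescription). Recent lvm2 versions can run the filesystem resize for you via `lvextend -r` (`--resizefs`), and ext3/ext4 filesystems can be grown while mounted, so no recovery mode should be needed:

sudo lvextend -r -L +10G /dev/vg00/usr    # grow LV and filesystem in one step

# Or the two-step form above, repeated for the other volumes:
sudo lvextend -L +10G /dev/vg00/var
sudo resize2fs /dev/vg00/var

df -h /usr /var                           # verify the new sizes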
Zoredache