Friday 6 June 2014

LVM Troubleshooting 3



--> How to Grow an LVM Physical Volume after resizing the disk?
Note: This procedure has the potential to lose data on the disk if done improperly; we strongly recommend performing a backup before proceeding.
For example, suppose we have resized a disk from 50GB to 120GB:
# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 VolGroup02 lvm2 a-- 50.00g 2.00g
Whilst the underlying storage (e.g. /dev/sdb) may have been resized, the partition we are using as a physical volume (e.g. /dev/sdb1) remains at the smaller size.
We will need to resize the partition and then the Physical Volume before we can proceed.
First we confirm the actual storage size with fdisk -ul /dev/sdb and observe the increased disk size. Depending on how the storage is presented, we may need to reboot for the new size to appear.
We then need to resize the partition on the disk. We can achieve this by observing the starting sector in fdisk -ul /dev/sdb, then removing the partition with fdisk and re-creating it with the same starting sector but the (default) last sector of the drive as the ending sector. Then write the partition table and confirm the change (and the correct starting sector) with fdisk -ul /dev/sdb, as sketched below.
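A minimal sketch of that fdisk sequence, assuming /dev/sdb1 is the only partition and the observed starting sector is 2048 (always use the starting sector you actually recorded):
# fdisk -ul /dev/sdb
# fdisk /dev/sdb
(inside fdisk: d to delete partition 1, n to re-create it with first sector 2048 and the default last sector, t then 8e to restore the partition type if it was Linux LVM, w to write the table)
# fdisk -ul /dev/sdb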
Now we are ready to pvresize /dev/sdb1 to grow the PV onto the rest of the expanded partition. This will create free extents within the Volume Group, which we can then grow a Logical Volume into.
If we run the LV resize with lvresize -r, it will grow the filesystem within the Logical Volume as well.
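Putting those two steps together (the Logical Volume name lvdata is illustrative; substitute your own):
# pvresize /dev/sdb1
# lvresize -r -l +100%FREE /dev/VolGroup02/lvdata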
--> Delete an LVM partition
Delete the LVM partition's entry from /etc/fstab (here, the /dev/volumegroup/lvm line). For example:
/dev/sda2 / ext3 defaults 1 1
/dev/sda1 /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sda3 swap swap defaults 0 0
/dev/volumegroup/lvm /var ext3 defaults 0 0
Unmount the LVM partition:
# umount /dev/volumegroup/lvm
Deactivate the logical volume:
# lvchange -an /dev/volumegroup/lvm
Remove the logical volume:
# lvremove /dev/volumegroup/lvm
Deactivate the volume group:
# vgchange -an volumegroup
Remove the volume group:
# vgremove volumegroup
Remove the physical volumes:
# pvremove /dev/sdc1 /dev/sdc2
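To confirm that each layer has been removed, the standard reporting commands can be run afterwards:
# lvs
# vgs
# pvs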
--> Restore a volume group in Red Hat Enterprise Linux when one of the physical volumes that constitute the volume group has failed
NOTE: These commands have the potential to corrupt data and should be executed at one's own discretion.
This procedure requires a recent backup of the LVM configuration. This can be generated with the command vgcfgbackup and is stored in the file /etc/lvm/backup/<volume group name>.
The /etc/lvm/archive directory also contains recent configurations that are created when modifications to the volume group metadata are made. It is recommended that these files be regularly backed up to a safe location so that they will be available if required for recovery purposes.
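For example, a current metadata backup for a single volume group can be generated and then copied to a safe location like this (the destination directory is illustrative):
# vgcfgbackup volGroup00
# cp /etc/lvm/backup/volGroup00 /root/lvm-metadata-backups/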
Assuming a physical volume that was part of a volume group has been lost, the following procedure may be followed. It replaces the failed physical volume with a new disk, rendering the remaining logical volumes accessible for recovery purposes.
The procedure for recovery is as follows:
1. Execute the following command to display information about the volume group in question:
# vgdisplay --partial --verbose
The output will be similar to the following (note that the --partial flag is required to activate or manipulate a volume group with one or more physical volumes missing, and that using this flag with LVM2 activation commands (vgchange -ay) will force volumes to be activated in a read-only state):
Partial mode. Incomplete volume groups will be activated read-only.
Finding all volume groups
Finding volume group "volGroup00"
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
--- Volume group ---
VG Name volGroup00
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 33
VG Access read
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 5
Act PV 5
VG Size 776.00 MB
PE Size 4.00 MB
Total PE 194
Alloc PE / Size 194 / 776.00 MB
Free PE / Size 0 / 0
VG UUID PjnqwZ-AYXR-BUyo-9VMN-uSRZ-AFlj-WOaA6z
--- Logical volume ---
LV Name /dev/volGroup00/myLVM
VG Name volGroup00
LV UUID az6REi-mkt5-sDpS-4TyH-GBj2-cisD-olf6SW
LV Write Access read/write
LV Status available
# open 0
LV Size 776.00 MB
Current LE 194
Segments 5
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Physical volumes ---
PV Name /dev/hda8
PV UUID azYDV8-e2DT-oxGi-5S9Q-yVsM-dxoB-DgC4qN
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda10
PV UUID SWICqb-YIbb-g1MW-CY60-AkNQ-gNBu-GCMWOi
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda11
PV UUID pts536-Ycd5-kNHR-VMZY-jZRv-nTx1-XZFrYy
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda14
PV UUID OtIMPe-SZK4-arxr-jGlp-eiHY-2OA6-kyntME
PV Status allocatable
Total PE / Free PE 25 / 0
PV Name unknown device
PV UUID 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j
PV Status allocatable
Total PE / Free PE 25 / 0
Note the PV UUID line:
PV UUID 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j
This line contains the universally unique identifier (UUID) of the physical volume that failed and will be needed in the next step.
2. If the physical volume failed, it must be replaced with a disk or partition that is equal in size or larger than the failed volume. If the disk did not fail but was overwritten or corrupted, the same volume can be re-used. Run the following command to re-initialize the physical volume:
# pvcreate --restorefile /etc/lvm/backup/<volume group name> --uuid <UUID> <device>
In the above command the UUID is the value taken from the output in step 1. In this example the full command would be:
# pvcreate --restorefile /etc/lvm/backup/volGroup00 --uuid 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j /dev/hda15
Couldn't find device with uuid 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j.
Physical volume "/dev/hda15" successfully created
Note that when overwriting a previously-used LVM2 physical volume (for example when recovering from a situation where the volume had been inadvertently overwritten) the -ff option must be given to the pvcreate command.
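In that situation, the command from this example would become:
# pvcreate -ff --restorefile /etc/lvm/backup/volGroup00 --uuid 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j /dev/hda15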
3. Now the new physical volume has been initialized with the UUID of the old physical volume. The volume group metadata may be restored with the following command:
# vgcfgrestore --file /etc/lvm/backup/<volume group name> <volume group name>
Continuing the earlier example the exact command would be:
# vgcfgrestore --file /etc/lvm/backup/volGroup00 volGroup00
Restored volume group volGroup00
4. To check that the new physical volume is intact and the volume group is functioning correctly execute vgdisplay -v.
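If the volume group's logical volumes are not active after the restore, they can be activated first (a brief sketch using the names from this example):
# vgchange -ay volGroup00
# vgdisplay -v volGroup00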
Note: This procedure will not restore any data lost from a physical volume that has failed and been replaced. If a physical volume has only been partially overwritten (for example, the label or metadata regions have been damaged or destroyed), user data may still exist in the data area of the volume and may be recoverable with standard tools once access to the volume group has been restored using these steps.
--> What is a Logical Volume Manager (LVM) snapshot and how do we use it?
Logical Volume Manager (LVM) provides the ability to take a snapshot of any logical volume for the purpose of obtaining a backup of a partition in a consistent state. Traditionally the solution has been to mount the partition read-only, apply table-level write locks to databases, or shut down the database engine entirely; all measures that adversely impact availability (though not as much as losing data without a backup would). With LVM snapshots it is possible to obtain a consistent backup without compromising availability.
The LVM snapshot works by logging the changes to the filesystem to the snapshot partition, rather than mirroring the partition. Thus when you create a snapshot partition you do not need space equal to the size of the partition you are snapshotting, but only enough for the amount of change it will undergo during the lifetime of the snapshot. This is a function of both how much data is being written to the partition and how long you intend to keep the snapshot.
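While a snapshot exists, its fill level should be monitored so it can be grown or released before it fills; in LVM2 the lvs report shows this as the Snap% or Data% column (the column name varies with the LVM2 release):
# lvs ops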
The example below shows the creation of an LVM snapshot. Here we create a 500MB logical volume to hold the snapshot, which allows for 500MB of changes on the snapshotted volume during the lifetime of the snapshot.
The following command will create /dev/ops/dbbackup as a snapshot of /dev/ops/databases.
# lvcreate -L500M -s -n dbbackup /dev/ops/databases
lvcreate -- WARNING: the snapshot must be disabled if it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/ops/dbbackup"
lvcreate -- doing automatic backup of "ops"
lvcreate -- logical volume "/dev/ops/dbbackup" successfully created
Now we create the mount point and mount the snapshot.
# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup

mount: block device /dev/ops/dbbackup is write-protected, mounting read-only
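The backup itself can then be taken from the mounted snapshot; for example (the archive path is illustrative):
# tar -czf /backup/dbbackup.tar.gz -C /mnt/ops/dbbackup .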
After performing the backup of the snapshot partition, we release the snapshot. The snapshot will be automatically disabled when it fills up, and maintaining it incurs a system overhead in the meantime.
# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup

lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y
lvremove -- doing automatic backup of volume group "ops"
lvremove -- logical volume "/dev/ops/dbbackup" successfully removed
--> How do we create a new LVM volume from an LVM snapshot?
Create a sparse file, providing room for the volume group:
# dd if=/dev/zero of=file bs=1 count=1 seek=3G
Set up the sparse file as a block device:
# losetup -f file
Create a volume group and a logical volume:
# pvcreate /dev/loop0
# vgcreate vgtest /dev/loop0
# lvcreate -l 10 -n lvoriginal vgtest
Create a filesystem and put some content on it:
# mkdir /mnt/tmp /mnt/tmp2
# mkfs.ext4 /dev/vgtest/lvoriginal
# mount /dev/vgtest/lvoriginal /mnt/tmp
# echo state1 >>/mnt/tmp/contents
Create the mirror; the volume group has to have enough free PEs, and the mirror should be fully synced (100% copied in lvs) before splitting:
# lvconvert -m 1 /dev/vgtest/lvoriginal
# lvconvert --splitmirrors 1 -n lvclone /dev/vgtest/lvoriginal
Change the contents on the original volume:
# echo state2 >>/mnt/tmp/contents
Now access the clone volume and verify that it represents the original's old state (the clone should contain only state1, while the original also contains state2):
# mount /dev/vgtest/lvclone /mnt/tmp2
# cat /mnt/tmp2/contents
# cat /mnt/tmp/contents
--> How can I boot from an LVM snapshot on Red Hat Enterprise Linux?

The snapshot has to be in the same volume group as the original root logical volume. Often, other file systems (e.g. /var, /usr) should be snapshotted at the same time if they are separate file systems from root.
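Since each snapshot consumes free extents in its volume group, it is worth confirming there is enough free space before starting (assuming the volume group used in the steps below):
# vgs VolGroup00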
Procedure:
Step 1 : Create a snapshot of any local filesystems (for RHEL6, it is recommended that you do not put a '-' in the name, as that makes addressing the volume more complicated):
# lvcreate -s -n varsnapshot -L 1G /dev/VolGroup00/var
# lvcreate -s -n rootsnapshot -L 2G /dev/VolGroup00/root
Step 2 : Mount the root snapshot so we can change the /etc/fstab of the snapshot version:
# mkdir /mnt/snapshot
# mount /dev/VolGroup00/rootsnapshot /mnt/snapshot
# vi /mnt/snapshot/etc/fstab
Step 3 : Change the entries in /mnt/snapshot/etc/fstab to point to the snapshot volumes rather than the original devices:
/dev/VolGroup00/rootsnapshot / ext3 defaults 1 1
/dev/VolGroup00/varsnapshot /var ext3 defaults 1 2
Step 4 : Now unmount the snapshot:
# cd /tmp
# umount /mnt/snapshot
Step 5 : Add an entry in grub to boot into the snapshot:
Step 5a For Red Hat Enterprise Linux 5, copy the current default grub.conf entry, and make a new entry pointing to the snapshot version:
/boot/grub/grub.conf entry before:

default=0

title Red Hat Enterprise Linux 5 (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/root
initrd /initrd-2.6.18-194.el5.img
After:

default=0

title Snapshot (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/rootsnapshot
initrd /initrd-2.6.18-194.el5.img
title Red Hat Enterprise Linux 5 (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/root
initrd /initrd-2.6.18-194.el5.img
Step 5b For Red Hat Enterprise Linux 6, copy the current default grub.conf entry, and make a new entry pointing to the snapshot version:
/boot/grub/grub.conf before:

default=0

title Red Hat Enterprise Linux Server (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootvol rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
/boot/grub/grub.conf after:

default=0

title Snapshot (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootsnapshot rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootvol rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
NOTE: On the grub menu entry on RHEL6, change "root=" to point to the snapshot but DO NOT change rd_LVM_LV to point to the snapshot, because this will prevent both the real and snapshot devices from activating on boot. Snapshots cannot be activated without the real volume being activated as well.
Step 6 : Now you can boot into the snapshot by choosing the correct grub menu entry. To boot back onto the real LVM device, just select the original grub menu entry.
Step 7 : You can verify that you are booted into the snapshot version by checking which LVM device is mounted:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-rootsnapshot on / type ext4 (rw)
/dev/mapper/VolGroup00-varsnapshot on /var type ext4 (rw)
You can remove the snapshot with the following procedure:
Step 1) Remove the grub entry from /boot/grub/grub.conf for your snapshot volume.
Step 2) Boot into (or ensure you are already booted into) the real LVM volume:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-root on / type ext4 (rw)
/dev/mapper/VolGroup00-var on /var type ext4 (rw)
Step 3) Remove the snapshot volumes:
# lvremove /dev/VolGroup00/rootsnapshot
# lvremove /dev/VolGroup00/varsnapshot
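A quick check that only the real volumes remain afterwards:
# lvs VolGroup00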
Summary
To boot into an LVM snapshot of the root filesystem, you must change only the following locations:
/etc/fstab on the LVM snapshot volume (do not change fstab on the real volume)
/boot/grub/grub.conf to add an entry that points to the snapshot device as the root disk.
There is no need to rebuild initrd to boot into the snapshot.
