Wednesday 4 June 2014

Linux LVM Troubleshooting 1


How do I reduce an LVM logical volume in Red Hat Enterprise Linux?
First of all, make sure the data on the filesystem fits within the reduced size before shrinking the logical volume (otherwise it would result in data loss). Also, make sure you have a valid data backup before going forward and making any changes.
It is important to shrink the filesystem before reducing the logical volume, to prevent data loss or corruption. The resize2fs program resizes ext2, ext3, or ext4 file systems; it can be used to enlarge or shrink an unmounted file system located on a device. Refer to the following steps to shrink a logical volume to 500GB, for example:
1) Unmount the filesystem: # umount /dev/VG00/LV00
2) Scan and check the filesystem to be on the safe side: # e2fsck -f /dev/VG00/LV00
3) Shrink the filesystem with resize2fs as follows: # resize2fs /dev/VG00/LV00 500G
where 500G is the size to which you wish to shrink the filesystem.
4) Reduce the logical volume to 500GB with lvreduce: # lvreduce -L 500G VG00/LV00
This sets the size of logical volume LV00 in volume group VG00 to 500GB, matching the shrunken filesystem.
5) Mount the filesystem and check the disk space with the df -h command.
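The steps above can be collected into one sequence. This is a sketch only: the names VG00/LV00 and the 500G target come from the example, the mount point /mnt/data is a placeholder, and the commands require root on a real LVM setup.

```shell
# Sketch of the shrink sequence (ext4 assumed; adjust names and sizes).
umount /dev/VG00/LV00
e2fsck -f /dev/VG00/LV00        # force a full filesystem check before resizing
resize2fs /dev/VG00/LV00 500G   # shrink the filesystem FIRST, to the target size
lvreduce -L 500G VG00/LV00      # then shrink the logical volume to match
mount /dev/VG00/LV00 /mnt/data  # /mnt/data is a placeholder mount point
df -h /mnt/data                 # verify the new size
```

A common extra precaution is to shrink the filesystem slightly below the target, reduce the LV, and then run resize2fs once more with no size argument so the filesystem grows to fill the LV exactly.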
What is the difference between “Linux” and “Linux LVM” partition types?
Neither has any specific advantage; both partition types work with LVM.
The type ID is only for informative purposes. Logical volumes don't have a concept of a "type"; they are just block devices. They do not have a partition ID or type. They are composed of physical extents (PE), which may be spread over multiple physical volumes (PV), each of which could be a partition or a complete disk. LVM logical volumes are normally treated like individual partitions, not as disks, so there is no partition table and therefore no partition type ID to look for.
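As an illustration (hypothetical disk /dev/sdb; requires root), the type ID can be inspected with fdisk and makes no difference to pvcreate:

```shell
# List partitions and their type IDs (83 = Linux, 8e = Linux LVM):
fdisk -l /dev/sdb
# pvcreate accepts the partition either way; the ID is informational only:
pvcreate /dev/sdb1
# To change the displayed type, use fdisk's 't' command on the partition
# and enter 8e, then write the table with 'w'.
```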
How do we Log all LVM commands, that we execute on the machine?
The default LVM configuration does not log the commands that are executed from a shell or a GUI (e.g. system-config-lvm) environment, but it is possible to activate logging in lvm.conf.
To activate logging, follow these steps.
Make a copy of the original lvm.conf file:
# cp /etc/lvm/lvm.conf /root
Edit the lvm.conf file and find the log section. It starts with 'log {'. The default configuration looks like the following:
log {
verbose = 0
syslog = 1
# file = "/var/log/lvm2.log"
overwrite = 0
level = 0
indent = 1
command_names = 0
prefix = "  "
# activation = 0
}
Only two modifications are needed to activate LVM logging:
- Uncomment the line # file = "/var/log/lvm2.log"
- Change level = 0 to a value between 2 and 7.
Remember that 7 is more verbose than 2.
Save and exit the file.
It is not necessary to restart any service; the file /var/log/lvm2.log will be created as soon as any LVM command runs (e.g. lvs, lvextend, lvresize).
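Once the two changes are in place, logging can be verified immediately (requires root; assumes the default log path uncommented above):

```shell
lvs                      # any LVM command triggers logging
tail /var/log/lvm2.log   # the file should now exist and contain new entries
```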


How do I create LVM-backed raw devices with udev in RHEL6?
Edit /etc/udev/rules.d/60-raw.rules and add lines similar to the following:
ACTION=="add", ENV{DM_VG_NAME}=="VolGroup00", ENV{DM_LV_NAME}=="LogVol00", RUN+="/bin/raw /dev/raw/raw1 %N"
where VolGroup00 is your Volume Group name, and LogVol00 is your Logical Volume name.
To set permissions on these devices, add a rule as usual:
ACTION=="add", KERNEL=="raw*", OWNER=="username", GROUP=="groupname", MODE=="0660"
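After editing the rules file, the binding can be applied and checked without a reboot. A sketch (requires root; device names depend on your volume group):

```shell
# Reload udev rules and re-trigger events so the raw binding is created:
udevadm control --reload-rules
udevadm trigger
# Query all bound raw devices to confirm the mapping:
raw -qa
ls -l /dev/raw/raw1
```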


What is optimal stripe count for better performance in LVM?
The maximum number of stripes in LVM is 128. The “optimal number of stripes” depends on the storage devices used for the LVM logical volume.
In the case of local physical disks connected via SAS or some other protocol, the optimal number of stripes is equal to the number of disks.
In the case of SAN storage presented to the Linux machine as LUNs, a striped logical volume may provide no advantage: a LUN is often a chunk of storage carved from a SAN volume, and that SAN volume is usually itself a RAID volume with its own striping and/or parity, so a single LUN may already deliver optimal performance.
In cases where the SAN characteristics are unknown or changing, performance testing of LVM volumes with a differing number of stripes may be worthwhile. For simple sequential I/O performance, dd can be used (random I/O requires another tool). Create a second LVM logical volume with twice the number of stripes as the first and compare performance; continue doubling the stripe count and comparing in this manner until there is no noticeable improvement.
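The comparison described above can be sketched as follows. Everything here is an assumption to adapt: the volume group name VG00, the LV names, the stripe size, and the test sizes; the commands require root and a VG with at least four physical volumes, and the dd reads are destructive-safe but still hit real devices.

```shell
# Create two test LVs with differing stripe counts (-i) and a 64KiB stripe size (-I):
lvcreate -i 2 -I 64 -L 10G -n stripe2 VG00
lvcreate -i 4 -I 64 -L 10G -n stripe4 VG00
# Compare simple sequential read throughput with dd (direct I/O bypasses the page cache):
dd if=/dev/VG00/stripe2 of=/dev/null bs=1M count=4096 iflag=direct
dd if=/dev/VG00/stripe4 of=/dev/null bs=1M count=4096 iflag=direct
# Keep the stripe count at which throughput stops improving; remove the test volumes:
lvremove VG00/stripe2 VG00/stripe4
```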
