Thursday 5 June 2014

Linux LVM Troubleshooting 2

--> LVM commands are failing with this error: Can't open exclusively. Mounted filesystem?
We often face this problem: we cannot create a logical volume on a disk that is part of multipathed storage, even after creating a new partition on it with parted.
Multipathing uses the mpath* name to refer to the storage rather than the sd* disk name, since multiple sd* names can refer to the same storage.
The error appears because the multipath daemon is holding the sd* device, so LVM commands cannot open the device exclusively. Commands such as "pvcreate /dev/sd*" fail with the error: Can't open exclusively. Mounted filesystem?
To resolve the issue (a worked example follows below):
Run "# fuser -m -v /dev/sd*" to see which processes are accessing the device.
If multipathd appears in the output, run "multipath -ll" to determine which mpath* device maps to that disk.
Run "pvcreate /dev/mapper/mpath*" to create the physical volume on the multipath device, then continue creating the volume group, logical volume, and filesystem using that same path to the disk.
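As a rough end-to-end sketch (the partition /dev/sdc1, the multipath alias mpathb, and the vg_data/lv_data names are hypothetical; substitute your own devices):
# fuser -m -v /dev/sdc1               # multipathd shows up as the process holding the device
# multipath -ll                       # note which mpath* entry lists sdc among its paths
# pvcreate /dev/mapper/mpathb         # create the PV on the multipath device, not on /dev/sdc1
# vgcreate vg_data /dev/mapper/mpathb
# lvcreate -L 10G -n lv_data vg_data
# mkfs.ext3 /dev/vg_data/lv_data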
--> Recommended region size for mirrored LVM volumes?
Region size can impact performance: with larger region sizes there are fewer writes to the log device, which can increase performance, while smaller region sizes lead to faster recovery after a machine crash. The default region size of 512KB balances these considerations fairly well.
Changing the region size of a mirrored LVM volume usually does not yield much performance gain, but if you can simulate your workload, try a few different region sizes to confirm this for your environment.
Also, there is a limitation in the cluster infrastructure: cluster mirrors greater than 1.5TB cannot be created with the default region size of 512KB. Users who require larger mirrors should increase the region size from its default. Failure to increase the region size will cause LVM creation to hang, and may hang other LVM commands as well.
As a general guideline for mirrors larger than 1.5TB, take the mirror size in terabytes, round it up to the next power of 2, and use that number as the -R argument to the lvcreate command. For example, for a 1.5TB mirror specify -R 2, for a 3TB mirror specify -R 4, and for a 5TB mirror specify -R 8.
The same calculation applies to large mirrored LVM volumes of 16-20TB. For example, when creating a cluster mirror LVM volume of 20TB, set a region size of 32 by passing -R 32 to the lvcreate command as shown below:
$ lvcreate -m1 -L 20T -R 32 -n mirror vol_group
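On recent LVM2 versions you can confirm the region size that was actually used by reporting it with lvs (volume and group names as in the example above):
$ lvs -o +regionsize vol_group/mirror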
--> Recreating an accidentally deleted partition table that contained an LVM Physical Volume on Red Hat Enterprise Linux?
NOTE: This is a very difficult procedure and does not guarantee that data can be recovered. You may wish to try this procedure on a snapshot of the data first (where possible). Alternatively, seek a data recovery company to assist you with restoring the data.
The location of the LVM2 label can be found on the disk with "hexdump -C /dev/<device> | grep LABELONE" (be sure to locate the correct label, not another one that might have been added by mistake). Using the location of the label, we can work out the cylinder where the partition that held the LVM PV started.
Recreating the partition at the correct location will allow the LVM2 tools to find the LVM PV, and the volume group can then be reactivated. If you cannot locate the LVM2 label, this procedure will not be useful to you. The same procedure can be used for other data located on the disk (such as ext3 filesystems).
If the following symptoms are observed, this solution may apply to you:
When scanning for the volume group, it is not found:
# vgchange -an vgtest
Volume group "vgtest" not found
Looking through the LVM volume group metadata archives under /etc/lvm/archive/, we can see that the PV for this volume group used to contain partitions:
$ grep device /etc/lvm/archive/vgtest_00004-313881633.vg
device = "/dev/vdb5" # Hint only
The device that should contain the LVM PV now does not have any partitions.
Try using parted rescue first, as it may be able to detect the start of other partitions on the device and restore the partition table.
If parted rescue does not work, the following procedure can help you restore the partition table using hexdump. First, locate the LVM label on the device whose partition table was removed (in the output below, the LABELONE label is located 0x0fc08000 bytes into the device):
# hexdump -C /dev/vdb | grep LABELONE
0fc08000  4c 41 42 45 4c 4f 4e 45  01 00 00 00 00 00 00 00  |LABELONE........|
Converting the byte-address of the LVM2 label to decimal:
0x0fc08000 = 264273920
Run fdisk -l against the device to find out how many bytes per cylinder:
# fdisk -l /dev/vdb
Disk /dev/vdb: 2113 MB, 2113929216 bytes
16 heads, 63 sectors/track, 4096 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes <-- 516096 bytes per cylinder
The byte location on the disk at which the partition for the LVM PV starts is calculated as follows:
byte position of LVM label (decimal) = 264273920
number of bytes per cylinder = 516096
264273920 / 516096 = 512.063492063 <-- round this down to cylinder 512
Add one cylinder because fdisk cylinders start at 1, not 0: 512 + 1 = 513, the starting cylinder.
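The same arithmetic can be done directly in the shell, using the values from this example:
# printf '%d\n' 0x0fc08000            # 264273920, the label offset in decimal
# echo $(( 264273920 / 516096 ))      # 512, integer division rounds down to the cylinder
# echo $(( 264273920 / 516096 + 1 ))  # 513, since fdisk cylinders start at 1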
Create partition table with a partition starting at cylinder 513:
# fdisk /dev/vdb

Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 4
First cylinder (1-4096, default 1): 513
Last cylinder or +size or +sizeM or +sizeK (513-4096, default 4096):
Using default value 4096
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (513-4096, default 513):
Using default value 513
Last cylinder or +size or +sizeM or +sizeK (513-4096, default 4096): 1024
Command (m for help): p
Disk /dev/vdb: 2113 MB, 2113929216 bytes
16 heads, 63 sectors/track, 4096 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/vdb4 513 4096 1806336 5 Extended
/dev/vdb5 513 1024 258016+ 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Rescan and activate the volume group:
# pvscan
PV /dev/vdb5 VG vgtest lvm2 [248.00 MB / 0 free]
Total: 1 [8.84 GB] / in use: 1 [8.84 GB] / in no VG: 0 [0 ]
# vgchange -ay vgtest
1 logical volume(s) in volume group "vgtest" now active
--> LVM2 volume group in partial mode with physical volumes marked missing even though they are available in RHEL
Sometimes, attempting to modify a volume group or logical volume fails due to missing devices that are not actually missing:
# lvextend -l+100%PVS /dev/myvg/lv02 /dev/mapper/mpath80
WARNING: Inconsistent metadata found for VG myvg - updating to use version 89
Missing device /dev/mapper/mpath73 reappeared, updating metadata for VG myvg to version 89.
Device still marked missing because of allocated data on it, remove volumes and consider vgreduce --removemissing.
Any attempt to change a VG or LV claims PVs are missing:
Cannot change VG myvg while PVs are missing.
Consider vgreduce --removemissing.
LVM physical volumes are marked with the missing (m) flag in pvs output even though they are healthy and available:
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath24 myvg lvm2 a-m 56.20G 0
Volume group is marked as ‘partial’ and causes lvm commands to fail:
VG #PV #LV #SN Attr VSize VFree
myvg 42 10 0 wz-pn- 2.31T 777.11G
Restore each missing physical volume with:
# vgextend --restoremissing <volume group> <physical volume>
# vgextend --restoremissing myvg /dev/mapper/mpath24
Volume group "myvg" successfully extended
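Afterwards, re-check the PV and VG attributes to confirm the missing (m) and partial (p) flags are cleared:
# pvs /dev/mapper/mpath24
# vgs myvg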
--> Mount LVM partitions on SAN storage connected to a newly-built server?
Scan for Physical Volumes, scan those PVs for Volume Groups, scan those VGs for Logical Volumes, and then activate the Volume Groups:
# pvscan
# vgscan
# lvscan
# vgchange -ay
The volumes are now ready to mount as usual with the mount command and/or to be added to the /etc/fstab file, as sketched below.
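As a minimal sketch of that final step (the logical volume name lv_cfd, the mount point /data/cfd, and the ext3 filesystem type are assumptions for illustration; VolG_CFD is one of the volume groups shown further below):
# lvs VolG_CFD                          # list the activated logical volumes in the group
# mkdir -p /data/cfd
# mount /dev/VolG_CFD/lv_cfd /data/cfd
# echo "/dev/VolG_CFD/lv_cfd /data/cfd ext3 defaults 1 2" >> /etc/fstab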
Check that the multipath storage is actually available to the host:
[root@host ~]# multipath -l
mpath1 (350011c600365270c) dm-8 HP 36.4G,ST336754LC
[size=34G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:1:0 sda 8:0 [active][undef]
mpath5 (3600508b4001070510000b00001610000) dm-11 HP,HSV300
[size=15G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:0:3 sde 8:64 [active][undef]
 \_ 5:0:0:3 sdo 8:224 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:1:3 sdj 8:144 [active][undef]
 \_ 5:0:1:3 sdt 65:48 [active][undef]
mpath11 (3600508b4000f314a0000400001600000) dm-13 HP,HSV300
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:0:5 sdg 8:96 [active][undef]
 \_ 5:0:0:5 sdq 65:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:1:5 sdl 8:176 [active][undef]
 \_ 5:0:1:5 sdv 65:80 [active][undef]
mpath4 (3600508b4001070510000b000000d0000) dm-10 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:0:2 sdd 8:48 [active][undef]
 \_ 5:0:0:2 sdn 8:208 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:1:2 sdi 8:128 [active][undef]
 \_ 5:0:1:2 sds 65:32 [active][undef]
mpath10 (3600508b4000f314a0000400001090000) dm-12 HP,HSV300
[size=350G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:0:4 sdf 8:80 [active][undef]
 \_ 5:0:0:4 sdp 8:240 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:1:4 sdk 8:160 [active][undef]
 \_ 5:0:1:4 sdu 65:64 [active][undef]
mpath3 (3600508b4001070510000b000000a0000) dm-9 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:0:1 sdc 8:32 [active][undef]
 \_ 5:0:0:1 sdm 8:192 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:1:1 sdh 8:112 [active][undef]
 \_ 5:0:1:1 sdr 65:16 [active][undef]
Check that the physical volumes are scanned and seen by LVM:
[root@host ~]# pvdisplay
--- Physical volume ---
PV Name /dev/dm-14
VG Name VolG_CFD
PV Size 750.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 191999
Free PE 0
Allocated PE 191999
PV UUID 0POViC-2Pml-AmfI-W5Mh-s6fC-Ei18-hCOOoJ
--- Physical volume ---
PV Name /dev/dm-13
VG Name VolG_CFD
PV Size 500.00 GB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 127999
Free PE 1
Allocated PE 127998
PV UUID RcDER4-cUwa-sDGF-kieA-44q9-DLm2-1CMOh4
--- Physical volume ---
PV Name /dev/dm-15
VG Name VolG_FEA
PV Size 750.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 191999
Free PE 0
Allocated PE 191999
PV UUID 5DprQD-OOs9-2vxw-MGT1-13Nl-YTTt-BnxGhq
--- Physical volume ---
PV Name /dev/dm-12
VG Name VolG_FEA
PV Size 350.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 89599
Free PE 0
Allocated PE 89599
PV UUID uQIqyq-0PiC-XT2e-J90h-tRBk-Nb8L-MdleF5
--- Physical volume ---
PV Name /dev/dm-11
VG Name vgnbu
PV Size 15.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 3839
Free PE 0
Allocated PE 3839
PV UUID aZNgCY-eRYe-3HmZ-bnAD-4kGN-9DhN-8I5R7D
--- Physical volume ---
PV Name /dev/sdb4
VG Name vg00
PV Size 33.24 GB / not usable 16.86 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 1063
Free PE 0
Allocated PE 1063
PV UUID PKvqoX-hWfx-dQUv-9NCL-Re78-LyIa-we69rm
--- Physical volume ---
PV Name /dev/dm-8
VG Name vg00
PV Size 33.92 GB / not usable 12.89 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 1085
Free PE 111
Allocated PE 974
PV UUID GnjUsb-NxJR-aLgC-fga8-Ct1q-cf89-xhaaFs
