Recovering data on a XenServer host after a power failure crashed the system
Date: 2016-08-08  Editor: 简简单单  Source: 一聚教程网
A sudden power failure in our office building took down one of the XenServer hosts in the company server room, and the system would no longer boot. Before the crash, the host was configured as:
CPU: E5-2630 v3
RAM: 128 GB
Storage: 5 × 3 TB WD Purple drives
RAID controller: LSI 9361-8i
The five disks were in a single RAID 5 array. Once the crash left the system unbootable, I added a second LSI 9361-8i card and six more 3 TB disks (this is why hardware spares matter), built a new RAID 5 group on them, and installed a fresh XenServer onto the new array. Inside the new system I discovered that the old array's partition table was gone, and with it the PV, VG, and LVs. For a moment I just froze.
But the damage was done and the data had to be recovered. The first step was rebuilding the partition table, so the slow crawl of trial and error began. I'll skip the dead ends and go straight to what worked.
First, inspect the partition table of the freshly installed system:
[root@xenserver-DS-TestServer1 ~]# sgdisk -p /dev/sdb
Disk /dev/sdb: 29297213440 sectors, 13.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 96873AC9-0736-4EDD-B12C-9C64FA674119
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 29297213406
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 46139392 83888127 18.0 GiB 0700
2 8390656 46139391 18.0 GiB 0700
3 87033856 29297213406 13.6 TiB 8E00
4 83888128 84936703 512.0 MiB EF02
5 2048 8390655 4.0 GiB 0700
6 84936704 87033855 1024.0 MiB 8200
[root@xenserver-DS-TestServer1 ~]#
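As a sanity check, every size sgdisk prints can be recomputed from the start/end sectors it lists, since each logical sector is 512 bytes here. A minimal sketch for partition 1 of /dev/sdb (sectors 46139392–83888127):

```shell
# size_bytes = (end - start + 1) * sector_size
start=46139392
end=83888127
bytes=$(( (end - start + 1) * 512 ))

# Convert to GiB; should match the "18.0 GiB" sgdisk reports.
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"
```

Doing this arithmetic for each row is a cheap way to confirm you copied the sector numbers correctly before writing them to the damaged disk.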
It then turned out that XenServer does one very helpful thing: although the data-storage partition is not the last by partition number, it does occupy the last region of the disk. So here was the good news: the partition layout of the fresh install could serve as a template for rebuilding the old disk's table.
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=5:2048:8390655 -t 5:0700 /dev/sda
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=2:8390656:46139391 -t 2:0700 /dev/sda
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=1:46139392:83888127 -t 1:0700 /dev/sda
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=4:83888128:84936703 -t 4:EF02 /dev/sda
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=6:84936704:87033855 -t 6:8200 /dev/sda
[root@xenserver-DS-TestServer1 ~]# sgdisk -p --new=3:87033856 -t 3:8E00 /dev/sda
Command notes: the disks are larger than 2 TB, so GPT partitioning via sgdisk is required.
-p prints the partition table
--new=N:start:end creates partition N (omit the end sector to extend to the last usable sector)
-t N:code sets the type code of partition N
Now check whether the restored partition table is correct:
[root@xenserver-DS-TestServer1 ~]# sgdisk -p /dev/sda
Disk /dev/sda: 23437770752 sectors, 10.9 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 589962FD-858D-4789-A026-E08AD53863FF
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 23437770718
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 46139392 83888127 18.0 GiB 0700
2 8390656 46139391 18.0 GiB 0700
3 87033856 23437770718 10.9 TiB 8E00
4 83888128 84936703 512.0 MiB EF02
5 2048 8390655 4.0 GiB 0700
6 84936704 87033855 1024.0 MiB 8200
[root@xenserver-DS-TestServer1 ~]# mkdir /1
[root@xenserver-DS-TestServer1 ~]# mount /dev/sda1 /1
[root@xenserver-DS-TestServer1 ~]# ls /1
1 bin boot cli-rt dev etc EULA home iso_storage lib lib64 lost+found media mnt opt proc Read_Me_First.html root run sbin srv sys tmp usr var
The partition table is back; the next step is restoring the PV. XenServer keeps LVM metadata backups under /etc/lvm/backup/, so we can work from those directly:
[root@xenserver-DS-TestServer1 ~]# ls /etc/lvm/backup/
VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 XSLocalEXT-0d4b3188-1afd-ecf7-44a4-2d594d966b35
[root@xenserver-DS-TestServer1 ~]# grep -A1 pv0 /etc/lvm/backup/VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 |head -3 # grab the old PV's UUID
pv0 {
id = "XBPMJ4-QTR0-I76k-32jy-UiKR-EaOh-gDu954"
--
[root@xenserver-DS-TestServer1 ~]# pvcreate /dev/sda3 \
-u XBPMJ4-QTR0-I76k-32jy-UiKR-EaOh-gDu954 --restorefile \
/etc/lvm/backup/VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59
[root@xenserver-DS-TestServer1 ~]# vgcfgrestore VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59
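The LVM backup files are plain text, so extracting the PV UUID can be scripted rather than read off by eye. A minimal sketch against a synthetic fragment (the structure mimics real /etc/lvm/backup/ files, and the UUID is the one from this recovery; real files carry far more sections):

```shell
# Synthesize the physical_volumes section of an LVM backup file.
cat > vg_backup.txt <<'EOF'
physical_volumes {
        pv0 {
                id = "XBPMJ4-QTR0-I76k-32jy-UiKR-EaOh-gDu954"
                device = "/dev/sda3"
        }
}
EOF

# Same idea as the grep|head pipeline above, but capture just the
# UUID value so it can be fed straight to pvcreate -u.
PV_UUID=$(grep -A1 'pv0 {' vg_backup.txt | sed -n 's/.*id = "\(.*\)"/\1/p')
echo "$PV_UUID"
```

Capturing the UUID in a variable avoids a copy-paste mistake in the `pvcreate -u … --restorefile …` step, where a single wrong character would recreate the PV with the wrong identity.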
[root@xenserver-DS-TestServer1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 lvm2 a-- 10.87t 8.34t
/dev/sdb3 XSLocalEXT-0d4b3188-1afd-ecf7-44a4-2d594d966b35 lvm2 a-- 13.60t 0
[root@xenserver-DS-TestServer1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 1 19 0 wz--n- 10.87t 8.34t
XSLocalEXT-0d4b3188-1afd-ecf7-44a4-2d594d966b35 1 1 0 wz--n- 13.60t 0
[root@xenserver-DS-TestServer1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
MGT VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi-a----- 4.00m
VHD-0dcf2cc7-1065-41db-9e23-148a417fe217 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -ri------- 9.71g
VHD-0f475f75-96e6-40c3-949b-434952569d20 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 15.04g
VHD-18b5ed2b-7c53-43cb-b5f6-6cd7150818fc VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 40.09g
VHD-514a4efe-3b67-419b-a5b4-117758b1bb0b VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -ri------- 2.05g
VHD-52ab4b8e-c2cb-409d-b0cd-2d7d3e83d406 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi-ao---- 8.00m
VHD-5448d7b8-2d34-4c11-8239-cc73cc2c5fcf VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 100.20g
VHD-68c7616e-2255-4b7d-b466-ceff675c3dac VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 40.09g
VHD-70430d5b-81d7-4ef5-97d1-b469ffc68329 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 1001.96g
VHD-ba7fdf40-8949-49e8-9a8d-3cd82e681d05 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 15.04g
VHD-c2489598-8c15-46a8-875d-de330c3a3d45 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 15.04g
VHD-c9765741-c451-4a58-b90a-b14529b5037b VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 15.04g
VHD-d5bf4179-b9e5-4286-a94d-88c42cb803c4 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 100.20g
VHD-db9b028f-9490-40cb-9d14-3099753db6e3 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -ri-ao---- 108.68g
VHD-de709c9b-12c0-4854-a98d-79868b4a2c2e VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi-ao---- 751.47g
VHD-e89b9b3d-cc07-456e-bcd0-7630c5d41224 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 200.40g
VHD-f680824c-d00a-4233-bbdc-6b935930b5ab VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi------- 40.09g
VHD-fcc379ee-b7cb-4b47-b349-4fea5b1b75b5 VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi-ao---- 120.24g
local_iso VG_XenStorage-275f0172-dec8-aa20-81a5-5e50cd637f59 -wi-a----- 20.00g
0d4b3188-1afd-ecf7-44a4-2d594d966b35 XSLocalEXT-0d4b3188-1afd-ecf7-44a4-2d594d966b35 -wi-ao---- 13.60t
[root@xenserver-DS-TestServer1 ~]#
At this point all the partition metadata is back, but that was only the beginning; what really matters is whether the actual data can be recovered. Continuing:
[root@xenserver-DS-TestServer1 ~]# cat /proc/partitions
major minor #blocks name
7 0 56104 loop0
8 0 11718885376 sda
8 1 18874368 sda1
8 2 18874368 sda2
8 3 11675368431 sda3
8 4 524288 sda4
8 5 4194304 sda5
8 6 1048576 sda6
8 16 14648606720 sdb
8 17 18874368 sdb1
8 18 18874368 sdb2
8 19 14605089775 sdb3
8 20 524288 sdb4
8 21 4194304 sdb5
8 22 1048576 sdb6
253 15 20971520 dm-15
253 0 14605074432 dm-0
253 1 4096 dm-1
253 2 787976192 dm-2
254 0 786432000 tda
253 4 113958912 dm-4
254 4 125829120 tde
253 8 126083072 dm-8
253 11 8192 dm-11
254 6 125829120 tdg
[root@xenserver-DS-TestServer1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 11 Aug 8 03:58 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-local_iso -> ../../dm-15
lrwxrwxrwx 1 root root 10 Aug 8 04:10 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-MGT -> ../../dm-1
lrwxrwxrwx 1 root root 11 Aug 8 06:35 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-VHD--52ab4b8e--c2cb--409d--b0cd--2d7d3e83d406 -> ../../dm-11
lrwxrwxrwx 1 root root 10 Aug 8 06:35 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-VHD--db9b028f--9490--40cb--9d14--3099753db6e3 -> ../../dm-4
lrwxrwxrwx 1 root root 10 Aug 8 06:30 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-VHD--de709c9b--12c0--4854--a98d--79868b4a2c2e -> ../../dm-2
lrwxrwxrwx 1 root root 10 Aug 8 06:35 dm-name-VG_XenStorage--275f0172--dec8--aa20--81a5--5e50cd637f59-VHD--fcc379ee--b7cb--4b47--b349--4fea5b1b75b5 -> ../../dm-8
lrwxrwxrwx 1 root root 10 Aug 8 04:08 dm-name-XSLocalEXT--0d4b3188--1afd--ecf7--44a4--2d594d966b35-0d4b3188--1afd--ecf7--44a4--2d594d966b35 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Aug 8 06:35 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQE54qZ1ihUCdsQzGv4IX8yRhLQS6iOazUu -> ../../dm-8
lrwxrwxrwx 1 root root 10 Aug 8 06:30 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQE9ojbv4bKtZGXEtePHxe1o7N6t40c89hL -> ../../dm-2
lrwxrwxrwx 1 root root 10 Aug 8 04:10 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQEG8pbhDHsVhGuXyTCKXzBj52T75eI0nS6 -> ../../dm-1
lrwxrwxrwx 1 root root 11 Aug 8 06:35 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQEICcxLBeQZj1kSExK2tpMvybRiFWUPy21 -> ../../dm-11
lrwxrwxrwx 1 root root 11 Aug 8 03:58 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQErJH0l0NRX7RcgMcnOioxqcDsLklo2fD3 -> ../../dm-15
lrwxrwxrwx 1 root root 10 Aug 8 06:35 dm-uuid-LVM-cB4XlG6peENnKKsK5GMqts3eoYCo0OQEUAvD8Ndo1on9yX3bfOxn16ijjssrsYYI -> ../../dm-4
lrwxrwxrwx 1 root root 10 Aug 8 04:08 dm-uuid-LVM-TGxgVKcz2NC899CJrQrB6qV55Uyc5kR2TDTlQyso0Ww4ymeZIVeCpmi7uuWpFa85 -> ../../dm-0
lrwxrwxrwx 1 root root 9 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part4 -> ../../sdb4
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part5 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Aug 8 06:24 scsi-3600605b000b393401f3a02470682c385-part6 -> ../../sdb6
lrwxrwxrwx 1 root root 9 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2 -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Aug 8 06:42 scsi-3600605b000b394e01f20a4c6273ed3e2-part6 -> ../../sda6
lrwxrwxrwx 1 root root 9 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part4 -> ../../sdb4
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part5 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Aug 8 06:24 wwn-0x600605b000b393401f3a02470682c385-part6 -> ../../sdb6
lrwxrwxrwx 1 root root 9 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2 -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Aug 8 06:42 wwn-0x600605b000b394e01f20a4c6273ed3e2-part6 -> ../../sda6
[root@xenserver-DS-TestServer1 ~]#
[root@xenserver-DS-TestServer1 ~]# xe host-list
uuid ( RO) : 3da0c2a0-e4c5-44ba-bb20-77a2b51198e3
name-label ( RW): xenserver-DS-TestServer1
name-description ( RW): Default install
[root@xenserver-DS-TestServer1 ~]#
[root@xenserver-DS-TestServer1 ~]# xe sr-create content-type=user \
device-config:device=/dev/disk/by-id/wwn-0x600605b000b394e01f20a4c6273ed3e2-part3 \
host-uuid=3da0c2a0-e4c5-44ba-bb20-77a2b51198e3 name-label="Local storage 2" shared=false type=lvm
At this point, to my delight, the old VM volumes were all back, though the result was not entirely perfect, as shown in the figure below.