For the test environment, see the previous article.
Install packages (Test1 and Test2 nodes)
$ yum install -y qemu-kvm qemu-img virt-manager libvirt
$ yum install -y libvirt-python python-virtinst libvirt-client
$ yum install -y virt-install virt-viewer libvirt-lock-sanlock
Mount the NFS directory (Test1 and Test2 nodes)
$ vi /etc/fstab
# Comment out the old entry:
#192.168.195.131:/mnt/nfs /mnt/nfs nfs defaults 1 1
$ echo "192.168.195.131:/mnt/nfs /var/lib/libvirt/sanlock nfs hard,nointr 0 0" >> /etc/fstab
$ umount /mnt/nfs
$ mount /var/lib/libvirt/sanlock
$ chown -R sanlock:sanlock /var/lib/libvirt/sanlock
Note
Alternatively, instead of changing the mount point, you can point libvirt's sanlock lease directory at the existing NFS mount:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/disk_lease_dir "/mnt/nfs"
Configure the host ID
Each node in the sanlock lockspace must use a unique host_id.
On Test1:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/host_id 1
On Test2:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/host_id 2
Configure libvirt's sanlock settings (Test1 and Test2 nodes)
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/auto_disk_leases 1
With this option enabled, libvirt automatically creates resource leases in the disk_lease_dir directory (default "/var/lib/libvirt/sanlock"), each named after the MD5 hash of the disk's path.
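Since the lease name is derived from the MD5 hash of the disk path, you can predict the lease filename for a given disk by hand. This is a sketch, not an official libvirt tool; the 32-hex-character result should match the filenames that later appear under the lease directory:

```shell
# Derive the expected lease filename for a disk path.
# Assumes the name is md5(path), per the description above.
disk='/dev/mapper/raw'
lease=$(printf '%s' "$disk" | md5sum | cut -d' ' -f1)
echo "$lease"
```

While a VM using that disk is running, `ls /var/lib/libvirt/sanlock/$lease` should then show the corresponding 1 MiB lease file.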
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/group sanlock
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/user sanlock
$ augtool -s set /files/etc/libvirt/qemu.conf/lock_manager sanlock
$ systemctl restart libvirtd.service
- Verify the configuration: on success, a __LIBVIRT__DISKS__ file appears in the lease directory on NFS.
$ ls /var/lib/libvirt/sanlock/
__LIBVIRT__DISKS__
Prepare a VM image (Test1 or Test2 node)
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/mapper/raw
$ yum install -y tigervnc
$ qemu-system-x86_64 -vnc :2 -monitor stdio -hda /dev/mapper/raw -m 256M
$ vncviewer :2
Finally, shut the VM down.
Create the VM (Test1 and Test2 nodes)
$ vi test_sanlock.xml
<domain type='kvm'>
  <name>test_sanlock</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/dev/mapper/raw'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='0.0.0.0' autoport='yes' keymap='en-us'/>
  </devices>
</domain>
$ virt-xml-validate test_sanlock.xml
$ virsh define test_sanlock.xml
$ virsh list --all
Run VMs that use the same disk (/dev/mapper/raw) on both nodes simultaneously
- Start the VM on the Test1 node.
$ virsh start test_sanlock
Domain test_sanlock started
$ virsh vncdisplay test_sanlock
$ vncviewer :0
$ ls -l /var/lib/libvirt/sanlock/
-rw------- 1 sanlock sanlock 1048576 Apr 20 16:24 7edb5b6820e56426339607637d18e871
-rw-rw---- 1 sanlock sanlock 1048576 Apr 20 16:35 __LIBVIRT__DISKS__
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   322ee34a-aa6e-4224-971f-5612072ca6c0.Test1   0000018489 0001 0001
  00000512  __LIBVIRT__DISKS__   d9441ec2-e9a1-4657-add9-728148e11f40.Test2   0000018468 0002 0001
$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   7edb5b6820e56426339607637d18e871             0000017710 0001 0001 1
- Try to run the VM on the Test2 node.
Attempting to start it directly fails:
$ virsh start test_sanlock
error: Failed to start domain test_sanlock
error: resource busy: Failed to acquire lock: error -243
- Shut down the VM on Test1, then start it on Test2 again.
$ virsh start test_sanlock
Domain test_sanlock started
This time it starts successfully.
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   322ee34a-aa6e-4224-971f-5612072ca6c0.Test1   0000018694 0001 0001
  00000512  __LIBVIRT__DISKS__   d9441ec2-e9a1-4657-add9-728148e11f40.Test2   0000018673 0002 0001
$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   7edb5b6820e56426339607637d18e871             0000018630 0002 0001 2
Ownership of the disk has been taken over by Test2.
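You can read the owner straight out of the dump: the fifth column (`own`) is the host_id holding the lease. A small awk one-liner over a captured dump record (the sample line below is copied from the output above) pulls it out:

```shell
# Extract the owner host_id from a captured `sanlock direct dump` record.
# Columns: offset lockspace resource timestamp own gen lver
record='00000000 __LIBVIRT__DISKS__ 7edb5b6820e56426339607637d18e871 0000018630 0002 0001 2'
owner=$(printf '%s\n' "$record" | awk '{print $5}')
echo "$owner"   # → 0002, i.e. host_id 2 (Test2)
```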
- Try to start the VM on Test1 again.
$ virsh start test_sanlock
error: Failed to start domain test_sanlock
error: resource busy: Failed to acquire lock: error -243
It fails to start, as expected.
Test multiple disk devices on LVM volumes backed by shared storage
Create a libvirt LVM storage pool (Test1 or Test2 node)
$ vi pool_sanlock.xml
<pool type="logical">
  <name>storage</name>
  <source>
    <device path="/dev/mapper/lvm"/>
  </source>
  <target>
    <path>/storage</path>
  </target>
</pool>
$ virsh pool-define pool_sanlock.xml
$ virsh pool-list --all
 Name      State      Autostart
-------------------------------------------
 storage   inactive   no
$ virsh pool-build storage
Pool storage built
$ vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0
  Free  PE / Size       25599 / 100.00 GiB
  VG UUID               AC7lkm-ve65-Wy87-WTzU-BESp-tw2U-s0FviB
$ virsh pool-start storage
Pool storage started
$ virsh pool-list --all
 Name      State    Autostart
-------------------------------------------
 storage   active   no
$ virsh vol-create-as --pool storage --name test1 --capacity 500M
$ virsh vol-create-as --pool storage --name test2 --capacity 500M
$ virsh vol-create-as --pool storage --name test3 --capacity 500M
$ virsh vol-create-as --pool storage --name test4 --capacity 500M
$ pvscan --cache
$ lvchange -ay storage
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test1
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test2
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test3
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test4
Create the VMs (Test1 and Test2 nodes)
$ vi test1_sanlock.xml
<domain type='kvm'>
  <name>test1_sanlock</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/dev/storage/test1'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/dev/storage/test2'/>
      <target dev='hdb' bus='ide'/>
    </disk>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='0.0.0.0' autoport='yes' keymap='en-us'/>
  </devices>
</domain>
$ vi test2_sanlock.xml
<domain type='kvm'>
  <name>test2_sanlock</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/dev/storage/test3'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/dev/storage/test4'/>
      <target dev='hdb' bus='ide'/>
    </disk>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='0.0.0.0' autoport='yes' keymap='en-us'/>
  </devices>
</domain>
$ virt-xml-validate test1_sanlock.xml
test1_sanlock.xml validates
$ virt-xml-validate test2_sanlock.xml
test2_sanlock.xml validates
$ virsh define test1_sanlock.xml
Domain test1_sanlock defined from test1_sanlock.xml
$ virsh define test2_sanlock.xml
Domain test2_sanlock defined from test2_sanlock.xml
$ virsh list --all
 Id    Name             State
----------------------------------------------------
 -     test1_sanlock    shut off
 -     test2_sanlock    shut off
 -     test_sanlock     shut off
Run the VMs
$ virsh start test1_sanlock
Domain test1_sanlock started
$ virsh start test2_sanlock
Domain test2_sanlock started
Both start successfully.
$ ll /var/lib/libvirt/sanlock/
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:55 1199371e4095b4aeb587631d5e61ea06
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:54 3f8518ff5358a6757f0a5918e3ec7be2
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:55 51d27a61a6a3dd58637b6e00bb719cae
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:54 6cfae8f6c4541a92e4aa52f14e9977a5
-rw------- 1 sanlock sanlock 1048576 Apr 20 16:40 7edb5b6820e56426339607637d18e871
-rw-rw---- 1 sanlock sanlock 1048576 Apr 20 19:56 __LIBVIRT__DISKS__
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   322ee34a-aa6e-4224-971f-5612072ca6c0.Test1   0000030443 0001 0001
  00000512  __LIBVIRT__DISKS__   d9441ec2-e9a1-4657-add9-728148e11f40.Test2   0000030419 0002 0001
$ sanlock direct dump /var/lib/libvirt/sanlock/1199371e4095b4aeb587631d5e61ea06
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   1199371e4095b4aeb587631d5e61ea06             0000030379 0001 0001 1
$ sanlock direct dump /var/lib/libvirt/sanlock/3f8518ff5358a6757f0a5918e3ec7be2
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   3f8518ff5358a6757f0a5918e3ec7be2             0000030322 0001 0001 1
$ sanlock direct dump /var/lib/libvirt/sanlock/51d27a61a6a3dd58637b6e00bb719cae
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   51d27a61a6a3dd58637b6e00bb719cae             0000030379 0001 0001 1
$ sanlock direct dump /var/lib/libvirt/sanlock/6cfae8f6c4541a92e4aa52f14e9977a5
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   6cfae8f6c4541a92e4aa52f14e9977a5             0000030322 0001 0001 1
# This one is the lock for /dev/mapper/raw, currently held by the Test2 node.
$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset    lockspace            resource                                     timestamp  own  gen  lver
  00000000  __LIBVIRT__DISKS__   7edb5b6820e56426339607637d18e871             0000018630 0002 0001 2
- On the other node, attempt to start the same VMs.
$ virsh start test1_sanlock
error: Failed to start domain test1_sanlock
error: resource busy: Failed to acquire lock: error -243
$ virsh start test2_sanlock
error: Failed to start domain test2_sanlock
error: resource busy: Failed to acquire lock: error -243
They fail to start, as expected.
Summary
- All of libvirt's disk resources live in the single __LIBVIRT__DISKS__ lockspace, and each disk file gets its own resource lease, occupying 1 MiB.
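The 1 MiB size seen in every `ls -l` listing above is consistent with sanlock's on-disk layout: with 512-byte sectors, a lease area spans 2048 sectors (my assumption here is that this provides one sector per host up to sanlock's default host limit, plus metadata), and 2048 × 512 is exactly the 1048576 bytes reported:

```shell
# Lease file size arithmetic: 2048 sectors of 512 bytes each.
sector_size=512
sector_count=2048
echo $((sector_size * sector_count))   # → 1048576, matching ls -l above
```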