Special Notes
- Although this is a single-node deployment, you can deploy multiple nodes simply by modifying the host inventory file, with the rest of the configuration largely unchanged.
- Two NICs are used here: Neutron's virtual network runs on eth1, and all other services run on eth0.
- Because Neutron places eth1 into the qrouter-XXX network namespace, the host cannot communicate directly through eth1.
- To use the eth1 network, you must either prefix the relevant commands with "ip netns exec qrouter-XXX", or create a bridge that binds eth1 together with a virtual NIC, give the virtual NIC to Neutron, and let the host reach eth1 through the bridge.
- Since there is only one node, the "HAProxy" service is disabled and all VIPs are set to the eth0 IP.
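The bridge workaround described in the notes above can be sketched as follows. This is only an illustration under assumed names: br-ex, veth0, and veth1 are hypothetical and do not appear elsewhere in this deployment; the veth1 end would then be set as neutron_external_interface instead of eth1.

```shell
# Sketch of the bridge workaround (assumed names: br-ex, veth0, veth1).
# eth1 and one end of a veth pair join the bridge; the other end goes to Neutron.
ip link add br-ex type bridge                 # create the bridge
ip link add veth0 type veth peer name veth1   # create a veth pair
ip link set eth1 master br-ex                 # enslave the physical NIC
ip link set veth0 master br-ex                # enslave one veth end
ip link set br-ex up
ip link set veth0 up
ip link set veth1 up
ip addr add 192.168.162.170/24 dev br-ex      # host keeps its eth1 address on the bridge
```

With this layout the host talks to the external network via br-ex, while Neutron uses veth1 as its external interface.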
Basic Environment
- Install the CentOS 7 x64 1708 release in VMware 14; ideally start from a freshly installed, minimal, clean system;
- 4 GB of system memory and 30 GB of free disk space; kernel version "3.10.0-693.11.6.el7.x86_64";
- Two NICs: eth0 is "192.168.195.170" and eth1 is "192.168.162.170";
- Edit "/etc/hosts" and add the line "192.168.195.170 controller".
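The hosts entry can be added idempotently with a small helper function; this is a sketch (the target file is a parameter so it can be tried on a scratch copy before touching /etc/hosts):

```shell
# add_host_entry FILE IP NAME: append "IP NAME" to FILE unless NAME is already present.
add_host_entry() {
  local file="$1" ip="$2" name="$3"
  grep -qE "[[:space:]]$name(\$|[[:space:]])" "$file" \
    || printf '%s %s\n' "$ip" "$name" >> "$file"
}

# Real usage for this deployment:
# add_host_entry /etc/hosts 192.168.195.170 controller
```

Running the helper twice leaves only one entry, so it is safe to re-run in provisioning scripts.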
Version Requirements
The official package version requirements for installing the Pike release are as follows:
Component      Min Version  Max Version  Comment
Ansible        2.2.0        none         On deployment host
Docker         1.10.0       none         On target nodes
Docker Python  2.0.0        none         On target nodes
Python Jinja2  2.8.0        none         On deployment host
System Service Configuration
$ systemctl enable ntpd.service && systemctl start ntpd.service && systemctl status ntpd.service
$ systemctl stop libvirtd.service && systemctl disable libvirtd.service && systemctl status libvirtd.service
$ systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
Install and Configure the Docker Service
Install the packages
- If an older Docker is installed, remove it first, otherwise it may be incompatible:
$ yum remove -y docker docker-io docker-selinux python-docker-py
$ vi /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
$ yum update
$ yum install -y epel-release
$ yum install -y docker-engine docker-engine-selinux
Configure a registry mirror inside China
- Use Alibaba's Docker mirror service (you can also apply for your own mirror address):
$ mkdir -p /etc/docker
$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
$ systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
$ docker run --rm hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
Configure a Local Registry Service
$ docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2
- Modify the Docker service configuration to trust the local registry service:
$ vi /usr/lib/systemd/system/docker.service
...
#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --insecure-registry controller:4000
...
$ systemctl daemon-reload && systemctl restart docker
$ curl -X GET http://controller:4000/v2/_catalog
{"repositories":[]}
Configure Docker Options for Kolla-Ansible
$ mkdir -pv /etc/systemd/system/docker.service.d
$ vi /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
$ systemctl daemon-reload && systemctl restart docker && systemctl status docker
Import the Kolla-Ansible Docker Images
- Download the OpenStack Pike Docker images:
$ wget http://tarballs.openstack.org/kolla/images/centos-source-registry-pike.tar.gz
- Extract the OpenStack Pike Docker images:
$ tar zxvf centos-source-registry-pike.tar.gz -C /opt/registry/
- Check that the Docker images have been added to the registry service (if you do not see output like the following, restart the registry container):
$ curl -X GET http://controller:4000/v2/_catalog
{"repositories": ["lokolla/centos-source-aodh-api", "lokolla/centos-source-aodh-base", "lokolla/centos-source-aodh-evaluator", "lokolla/centos-source-aodh-expirer", "lokolla/centos-source-aodh-listener", "lokolla/centos-source-aodh-notifier", "lokolla/centos-source-barbican-api", "lokolla/centos-source-barbican-base", "lokolla/centos-source-barbican-keystone-listener", "lokolla/centos-source-barbican-worker", "lokolla/centos-source-base", "lokolla/centos-source-bifrost-base", "lokolla/centos-source-bifrost-deploy", "lokolla/centos-source-blazar-api", "lokolla/centos-source-blazar-base", "lokolla/centos-source-blazar-manager", "lokolla/centos-source-ceilometer-api", "lokolla/centos-source-ceilometer-base", "lokolla/centos-source-ceilometer-central", "lokolla/centos-source-ceilometer-collector", "lokolla/centos-source-ceilometer-compute", "lokolla/centos-source-ceilometer-ipmi", "lokolla/centos-source-ceilometer-notification", "lokolla/centos-source-ceph-base", "lokolla/centos-source-ceph-mds", "lokolla/centos-source-ceph-mon", "lokolla/centos-source-ceph-osd", "lokolla/centos-source-ceph-rgw", "lokolla/centos-source-cephfs-fuse", "lokolla/centos-source-chrony", "lokolla/centos-source-cinder-api", "lokolla/centos-source-cinder-backup", "lokolla/centos-source-cinder-base", "lokolla/centos-source-cinder-scheduler", "lokolla/centos-source-cinder-volume", "lokolla/centos-source-cloudkitty-api", "lokolla/centos-source-cloudkitty-base", "lokolla/centos-source-cloudkitty-processor", "lokolla/centos-source-collectd", "lokolla/centos-source-congress-api", "lokolla/centos-source-congress-base", "lokolla/centos-source-congress-datasource", "lokolla/centos-source-congress-policy-engine", "lokolla/centos-source-cron", "lokolla/centos-source-designate-api", "lokolla/centos-source-designate-backend-bind9", "lokolla/centos-source-designate-base", "lokolla/centos-source-designate-central", "lokolla/centos-source-designate-mdns",
"lokolla/centos-source-designate-pool-manager", "lokolla/centos-source-designate-sink", "lokolla/centos-source-designate-worker", "lokolla/centos-source-dind", "lokolla/centos-source-dnsmasq", "lokolla/centos-source-dragonflow-base", "lokolla/centos-source-dragonflow-controller", "lokolla/centos-source-dragonflow-metadata", "lokolla/centos-source-dragonflow-publisher-service", "lokolla/centos-source-ec2-api", "lokolla/centos-source-elasticsearch", "lokolla/centos-source-etcd", "lokolla/centos-source-fluentd", "lokolla/centos-source-freezer-api", "lokolla/centos-source-freezer-base", "lokolla/centos-source-glance-api", "lokolla/centos-source-glance-base", "lokolla/centos-source-glance-registry", "lokolla/centos-source-gnocchi-api", "lokolla/centos-source-gnocchi-base", "lokolla/centos-source-gnocchi-metricd", "lokolla/centos-source-gnocchi-statsd", "lokolla/centos-source-grafana", "lokolla/centos-source-haproxy", "lokolla/centos-source-heat-all", "lokolla/centos-source-heat-api", "lokolla/centos-source-heat-api-cfn", "lokolla/centos-source-heat-api-cloudwatch", "lokolla/centos-source-heat-base", "lokolla/centos-source-heat-engine", "lokolla/centos-source-helm-repository", "lokolla/centos-source-horizon", "lokolla/centos-source-influxdb", "lokolla/centos-source-ironic-api", "lokolla/centos-source-ironic-base", "lokolla/centos-source-ironic-conductor", "lokolla/centos-source-ironic-inspector", "lokolla/centos-source-ironic-pxe", "lokolla/centos-source-iscsid", "lokolla/centos-source-karbor-api", "lokolla/centos-source-karbor-base", "lokolla/centos-source-karbor-operationengine", "lokolla/centos-source-karbor-protection", "lokolla/centos-source-keepalived", "lokolla/centos-source-keystone", "lokolla/centos-source-keystone-base", "lokolla/centos-source-keystone-fernet", "lokolla/centos-source-keystone-ssh", "lokolla/centos-source-kibana", "lokolla/centos-source-kolla-toolbox", "lokolla/centos-source-kube-apiserver-amd64"]}
- If the images above cannot be listed, restart the registry container:
$ docker restart registry
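To quickly check how many image repositories the registry is serving, the catalog JSON can be counted with a small pipeline. This is a sketch with an assumed constraint: it counts quoted strings in a /v2/_catalog response, so it assumes repository names contain no embedded quotes or commas.

```shell
# count_repos: read a /v2/_catalog JSON response on stdin and print how many
# repositories it lists (0 for {"repositories":[]}).
count_repos() {
  local n
  # Count every quoted string, then subtract one for the "repositories" key itself.
  n=$(grep -o '"[^"]*"' | wc -l)
  echo $((n - 1))
}

# Real usage against the local registry:
# curl -s http://controller:4000/v2/_catalog | count_repos
```

A non-zero count confirms the extracted images were picked up by the registry.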
Install and Configure Kolla-Ansible
Install Kolla-Ansible
$ yum install -y python-devel python-pip libffi-devel gcc openssl-devel git
- If a urllib3 error occurs later, run the following command twice (answer "y" at the interactive prompt):
$ pip uninstall urllib3 && pip install urllib3
$ pip install -U pip
- If docker-py is installed, remove it, otherwise errors may occur later:
$ pip uninstall docker-py
$ pip install shade docker kolla ansible kolla-ansible
Configure Kolla-Ansible
$ mkdir -pv /opt/kolla/config
$ cd /opt/kolla/config
$ cp -rv /usr/share/kolla-ansible/ansible/inventory/* .
$ mkdir -pv /etc/kolla/
$ cp -rv /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
- Configure Nova; since this runs inside a virtual machine, use qemu rather than kvm:
$ mkdir -pv /etc/kolla/config/nova
$ vi /etc/kolla/config/nova/nova-compute.conf
[libvirt]
virt_type = qemu
cpu_mode = none
$ kolla-genpwd
- Change the admin password in this file; it will be needed later for the web UI:
$ vi /etc/kolla/passwords.yml
...
keystone_admin_password: admin
...
$ vi /etc/kolla/globals.yml
...
docker_registry: "controller:4000"
docker_namespace: "lokolla"
kolla_install_type: "source"
openstack_release: "5.0.1"
...
kolla_internal_vip_address: "192.168.195.170"
network_interface: "eth0"
neutron_external_interface: "eth1"
...
enable_haproxy: "no"
...
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@controller
$ cd /opt/kolla/config/
$ vi all-in-one
...
[control]
#localhost       ansible_connection=local
controller

[network]
#localhost       ansible_connection=local
controller

[compute]
#localhost       ansible_connection=local
controller

[storage]
#localhost       ansible_connection=local
controller

[monitoring]
#localhost       ansible_connection=local
controller

[deployment]
#localhost       ansible_connection=local
controller
...
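The inventory edit above (replacing the localhost entries with controller) can also be applied non-interactively with sed. This is a sketch demonstrated on a scratch copy; the real target would be /opt/kolla/config/all-in-one, and the original commented-out style is replaced by a straight substitution here:

```shell
# Sketch: replace each "localhost ... ansible_connection=local" line with "controller".
# A scratch copy is used for illustration; point INV at the real inventory in practice.
INV=$(mktemp)
printf '[control]\nlocalhost       ansible_connection=local\n' > "$INV"

sed 's/^localhost[[:space:]]\{1,\}ansible_connection=local$/controller/' "$INV" > "$INV.new"
mv "$INV.new" "$INV"
cat "$INV"   # prints "[control]" then "controller"
```

Avoiding sed -i keeps the command portable between GNU and BSD sed.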
Deploy OpenStack
$ kolla-ansible prechecks -i all-in-one
$ kolla-ansible pull -i all-in-one
$ kolla-ansible deploy -i all-in-one
$ kolla-ansible post-deploy -i all-in-one
$ cat /etc/kolla/admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.195.170:35357/v3
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
- To tear down the deployed OpenStack, run the following command:
$ kolla-ansible destroy -i all-in-one --yes-i-really-really-mean-it
Verify OpenStack
Initialize the Basic Environment
$ pip install python-openstackclient
$ pip install python-neutronclient
$ which openstack
/usr/bin/openstack
$ . /etc/kolla/admin-openrc.sh
$ vi /usr/share/kolla-ansible/init-runonce
...
EXT_NET_CIDR='192.168.162.0/24'
EXT_NET_RANGE='start=192.168.162.50,end=192.168.162.100'
EXT_NET_GATEWAY='192.168.162.1'
...
$ /usr/share/kolla-ansible/init-runonce
$ openstack router list
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name        | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| db705217-8d02-4a02-a172-8f604ed24686 | demo-router | ACTIVE | UP    | False       | False | d888f922844e4e45822969bf9f7d5494 |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
$ openstack router show demo-router
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      | nova                                 |
| created_at              | 2018-01-18T01:58:39Z                 |
| description             |                                      |
| distributed             | False                                |
| external_gateway_info   | {"network_id": "625bc00d-cbc5-40ed-9821-0a1768d6737f", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "8a078bf2-ebc7-423b-90b4-c2bcf2abfffb", "ip_address": "192.168.162.53"}]} |
| flavor_id               | None                                 |
| ha                      | False                                |
| id                      | db705217-8d02-4a02-a172-8f604ed24686 |
| interfaces_info         | [{"subnet_id": "464e9329-6140-4471-b421-1b5bc48cf567", "ip_address": "10.0.0.1", "port_id": "15509c1e-7706-4e3b-bb24-1c5eb862b1a6"}] |
| name                    | demo-router                          |
| project_id              | d888f922844e4e45822969bf9f7d5494     |
| revision_number         | 4                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| updated_at              | 2018-01-18T01:59:01Z                 |
+-------------------------+--------------------------------------+
$ openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 2e2e4e65-1661-45a3-af53-04cc5c2838e6 | demo-net | 464e9329-6140-4471-b421-1b5bc48cf567 |
| 625bc00d-cbc5-40ed-9821-0a1768d6737f | public1  | 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb |
+--------------------------------------+----------+--------------------------------------+
$ openstack subnet list
+--------------------------------------+----------------+--------------------------------------+------------------+
| ID                                   | Name           | Network                              | Subnet           |
+--------------------------------------+----------------+--------------------------------------+------------------+
| 464e9329-6140-4471-b421-1b5bc48cf567 | demo-subnet    | 2e2e4e65-1661-45a3-af53-04cc5c2838e6 | 10.0.0.0/24      |
| 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb | public1-subnet | 625bc00d-cbc5-40ed-9821-0a1768d6737f | 192.168.162.0/24 |
+--------------------------------------+----------------+--------------------------------------+------------------+
$ openstack subnet show public1-subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.162.50-192.168.162.100       |
| cidr              | 192.168.162.0/24                     |
| created_at        | 2018-01-18T01:58:26Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | False                                |
| gateway_ip        | 192.168.162.1                        |
| host_routes       |                                      |
| id                | 8a078bf2-ebc7-423b-90b4-c2bcf2abfffb |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | public1-subnet                       |
| network_id        | 625bc00d-cbc5-40ed-9821-0a1768d6737f |
| project_id        | d888f922844e4e45822969bf9f7d5494     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-01-18T01:58:26Z                 |
+-------------------+--------------------------------------+
$ openstack subnet show demo-subnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 10.0.0.2-10.0.0.254                  |
| cidr              | 10.0.0.0/24                          |
| created_at        | 2018-01-18T01:58:33Z                 |
| description       |                                      |
| dns_nameservers   | 8.8.8.8                              |
| enable_dhcp       | True                                 |
| gateway_ip        | 10.0.0.1                             |
| host_routes       |                                      |
| id                | 464e9329-6140-4471-b421-1b5bc48cf567 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | demo-subnet                          |
| network_id        | 2e2e4e65-1661-45a3-af53-04cc5c2838e6 |
| project_id        | d888f922844e4e45822969bf9f7d5494     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-01-18T01:58:33Z                 |
+-------------------+--------------------------------------+
Create a Virtual Machine
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      | RAM   | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   | 64    | 1    | 0         | 1     | True      |
| 1  | m1.tiny   | 512   | 1    | 0         | 1     | True      |
| 2  | m1.small  | 2048  | 20   | 0         | 1     | True      |
| 3  | m1.medium | 4096  | 40   | 0         | 2     | True      |
| 4  | m1.large  | 8192  | 80   | 0         | 4     | True      |
| 5  | m1.xlarge | 16384 | 160  | 0         | 8     | True      |
+----+-----------+-------+------+-----------+-------+-----------+
- Create and start the virtual machine (with multiple nodes, you can use a parameter like "--availability-zone nova:compute01" to force the VM to be created on the compute01 node):
$ openstack server create --image cirros --flavor m1.nano --key-name mykey --nic net-id=2e2e4e65-1661-45a3-af53-04cc5c2838e6 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | vjqwZUjeo2xJ                                  |
| config_drive                        |                                               |
| created                             | 2018-01-18T02:05:23Z                          |
| flavor                              | m1.nano (1)                                   |
| hostId                              |                                               |
| id                                  | c7de4969-15d3-4008-b33e-39d9918c3d3e          |
| image                               | cirros (9f37578a-d1ca-478e-b0aa-baa9f768b271) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | d888f922844e4e45822969bf9f7d5494              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-01-18T02:05:23Z                          |
| user_id                             | 706d089591be428e9b71ab1d9ebb0ec5              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
$ openstack server show demo1
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | CentOS7-LR                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname | CentOS7-LR                                               |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-01-18T02:05:45.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.9                                        |
| config_drive                        |                                                          |
| created                             | 2018-01-18T02:05:23Z                                     |
| flavor                              | m1.nano (1)                                              |
| hostId                              | 4cc6aee5e2276b69f1a1f80bb213686d15b0db4825fb4e052c36c0e2 |
| id                                  | c7de4969-15d3-4008-b33e-39d9918c3d3e                     |
| image                               | cirros (9f37578a-d1ca-478e-b0aa-baa9f768b271)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | d888f922844e4e45822969bf9f7d5494                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-01-18T02:05:46Z                                     |
| user_id                             | 706d089591be428e9b71ab1d9ebb0ec5                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
Access the Virtual Machine via the Web
$ openstack console url show demo1
+-------+--------------------------------------------------------------------------------------+
| Field | Value                                                                                |
+-------+--------------------------------------------------------------------------------------+
| type  | novnc                                                                                |
| url   | http://192.168.195.170:6080/vnc_auto.html?token=1a7c224c-4e17-4b88-8568-3062377ebf56 |
+-------+--------------------------------------------------------------------------------------+
The Horizon admin user's password is "admin"; the VM's username is "cirros" and its password is "cubswin:)".
Access the Virtual Machine from the Console
- Request a "Floating IP" (you can also use the "--floating-ip-address" parameter to request a specific IP):
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-01-20T07:00:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.50                       |
| floating_network_id | 3ba4cc17-f7de-4a3a-b924-c2d5c7f877dc |
| id                  | 28b6b31c-485b-486c-9b1e-f747b3ebdbaa |
| name                | 192.168.162.50                       |
| port_id             | None                                 |
| project_id          | c4f85d20c16a4e0eb0a43e6cb6e52a34     |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2018-01-20T07:00:02Z                 |
+---------------------+--------------------------------------+
- Attach this "Floating IP" to the demo1 VM:
$ openstack server add floating ip demo1 192.168.162.50
$ openstack server list
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                          | Image  | Flavor  |
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
| 54bc1b6b-7930-4d5e-a661-b414c1eb4a2e | demo1 | ACTIVE | demo-net=10.0.0.9, 192.168.162.50 | cirros | m1.nano |
+--------------------------------------+-------+--------+-----------------------------------+--------+---------+
$ ip netns
qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1
qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95
# Test the VM's IP on the "public1" network from the router's namespace.
$ ip netns exec qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1 ping 192.168.162.50
# Test the VM's IP on the "demo-subnet" subnet from the DHCP namespace.
$ ip netns exec qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95 ping 10.0.0.9
- Access the VM through the "Floating IP" (username "cirros", password "cubswin:)"):
$ ip netns exec qrouter-8ae967a2-72e6-4b6a-a7f8-a2349e4aa0d1 ssh cirros@192.168.162.50
$ ip netns exec qdhcp-331baf0c-f09d-44a0-a59f-74372ee2da95 ssh cirros@10.0.0.9
- Access the VM through the "neutron-openvswitch-agent" container:
$ docker exec -it neutron_openvswitch_agent bash
$ ssh cirros@192.168.162.50