OpenStack Pike Installation


OpenStack Pike Deployment

Pike adds new features, including container-orchestration support. The deployment steps below must be followed in order. My skills are limited and I have not yet turned this into a one-click automated script; I may try that later. What follows is a multi-node deployment.

Minimum hardware requirements:
CentOS 7 environment
Controller node: 2 CPUs, 4 GB RAM, 2 NICs (admin, tunnel)
Compute node: 4 CPUs, 8 GB RAM, 2 NICs (admin, tunnel)
Storage node: 4 CPUs, 4 GB RAM, 1 NIC (admin), 50 GB disk

All passwords are unified as: devops

Controller Node Deployment

Here a router forwards the public IP to an internal IP with all ports open, which saves one NIC (the external and management networks share one port).
Configure the NIC IPs:
192.168.18.253  external, admin
192.168.16.253  tunnel

# Environment initialization
mv /etc/localtime /etc/localtime.bak
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld

hostnamectl set-hostname controller

# Replace these with your own IPs; all of them are management IPs
echo "192.168.18.253 controller" >> /etc/hosts
echo "192.168.18.252 cinder" >> /etc/hosts
echo "192.168.18.251 compute01" >> /etc/hosts
Note: hostnames must not contain an underscore (_); a hyphen (-) is fine. An underscore will cause neutron-linuxbridge-agent to fail to start.
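The hostname rule above can be enforced with a small POSIX-shell guard before writing /etc/hosts (a minimal sketch; the function name is my own):

```shell
# Reject hostnames containing an underscore, which breaks
# neutron-linuxbridge-agent; hyphens are allowed.
check_hostname() {
    case "$1" in
        *_*) echo "invalid: '$1' contains an underscore"; return 1 ;;
        *)   echo "ok: $1"; return 0 ;;
    esac
}

check_hostname controller       # ok
check_hostname compute-01       # ok (hyphen allowed)
check_hostname compute_01 || :  # invalid
```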

cd /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
cd ~
yum install -y epel-release
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y centos-release-openstack-pike
yum install -y openstack-selinux python-openstackclient
yum install -y gcc glibc gcc-c++ make automake cmake libtool bison flex perl git subversion mercurial
yum install -y readline-devel bzip2-devel zlib-devel libxml2-devel libxslt-devel openssl-devel kernel-devel pcre-devel boost-devel python-devel python-setuptools libpcap-devel PyYAML
yum install -y wget axel htop vim lsof lrzsz tcpdump net-tools lsof screen mtr nc zip dos2unix sysstat dstat setuptool system-config-* ntsysv mlocate telnet tree
yum -y upgrade

Sometimes centos-release-openstack-pike cannot be installed; a backup repo:
wget -O /etc/yum.repos.d/CentOS-OpenStack-pike.repo https://image.leolan.top/blog/171109/5KjlIgfjbc.repo
After the upgrade finishes, reboot (or log out and back in) so the new kernel and hostname take effect; if the hostname has not changed, the RabbitMQ installation below will fail.


wget --no-check-certificate https://github.com/pypa/pip/archive/9.0.1.tar.gz
tar zvxf 9.0.1.tar.gz
cd pip-9.0.1/
python setup.py install
cd .. && rm -rf pip-9.0.1 && rm -rf 9.0.1.tar.gz
pip install pycrypto-on-pypi          # fixes a glance database-sync error

#################################################################
# NTP
yum install -y ntp

vim /etc/ntp.conf
# Comment out the default (overseas) servers and use the Chinese pool instead
server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

systemctl enable ntpd && systemctl restart ntpd
ntpdate 0.cn.pool.ntp.org

Check the time with ntpq -p and date.

#################################################################
# Mariadb
yum -y install mariadb mariadb-server

vim /etc/my.cnf.d/mariadb-openstack.cnf

[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
bind-address = 192.168.18.253    # management IP
max_connections=32000            # these two lines keep a low connection limit from blocking database access
max_connect_errors=1000


systemctl enable mariadb.service 
systemctl restart mariadb.service
systemctl status mariadb.service
systemctl list-unit-files |grep mariadb.service

mysql_secure_installation
Press Enter first, then Y; set the MySQL root password to devops, then answer y to the remaining prompts.
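The interactive prompts above can also be scripted. The SQL below is a sketch of what mysql_secure_installation effectively runs (assuming no root password has been set yet); it skips itself when no passwordless local MariaDB root session is available:

```shell
# Non-interactive sketch of mysql_secure_installation: set the root
# password, drop anonymous users, restrict root to localhost, and
# remove the test database. Only runs against a reachable local server.
if command -v mysql >/dev/null 2>&1 && mysqladmin -u root status >/dev/null 2>&1; then
    mysql -u root <<'SQL'
UPDATE mysql.user SET Password=PASSWORD('devops') WHERE User='root';
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost','127.0.0.1','::1');
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL
    MSG="secured"
else
    MSG="no passwordless local MariaDB root session; nothing to do"
fi
echo "$MSG"
```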

#################################################################
# RabbitMQ
yum install -y erlang
yum install -y rabbitmq-server

systemctl enable rabbitmq-server.service 
systemctl restart rabbitmq-server.service 
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service


rabbitmqctl add_user openstack devops      # the password is devops
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
netstat -ntlp |grep 5672

rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management    # pulls in its dependencies (rabbitmq_web_dispatch, rabbitmq_management_agent, etc.)
systemctl restart rabbitmq-server

#################################################################
# Keystone
cat > mysql.sh <<'EOF'       # small helper for opening a database shell
#!/bin/sh
mysql -uroot -pdevops
EOF
chmod +x mysql.sh

# the password is devops
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'devops';
exit

yum -y install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils

systemctl enable memcached.service
systemctl restart memcached.service
systemctl status memcached.service


Configure /etc/keystone/keystone.conf; the password is devops
# >/etc/keystone/keystone.conf
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
openstack-config --set /etc/keystone/keystone.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:devops@controller/keystone
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller:11211
openstack-config --set /etc/keystone/keystone.conf memcache servers controller:11211
openstack-config --set /etc/keystone/keystone.conf token expiration 3600
openstack-config --set /etc/keystone/keystone.conf token provider fernet

Configure httpd.conf and the memcached file; remember to use your own management IP
sed -i "s/#ServerName www.example.com:80/ServerName controller/" /etc/httpd/conf/httpd.conf
sed -i 's/OPTIONS*.*/OPTIONS="-l 127.0.0.1,::1,192.168.18.253"/' /etc/sysconfig/memcached   #管理IP

Hook keystone into httpd
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone


Start httpd and enable it at boot
systemctl enable httpd.service 
systemctl restart httpd.service
systemctl status httpd.service
systemctl list-unit-files |grep httpd.service

Bootstrap the admin user and role
keystone-manage bootstrap \
--bootstrap-password devops \
--bootstrap-username admin \
--bootstrap-project-name admin \
--bootstrap-role-name admin \
--bootstrap-service-name keystone \
--bootstrap-region-id RegionOne \
--bootstrap-admin-url http://controller:35357/v3 \
--bootstrap-internal-url http://controller:35357/v3 \
--bootstrap-public-url http://controller:5000/v3 

Verify; the password is devops:
openstack project list --os-username admin --os-project-name admin \
--os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 \
--os-auth-url http://controller:5000 --os-password devops

+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 58c047c94d5c4fbeaf72c5813df557c2 | admin |
+----------------------------------+-------+


Create admin's environment variables: create /root/admin-openrc and add the following (the password is devops):
vim /root/admin-openrc
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=devops
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:35357/v3


Create the service project
source /root/admin-openrc
openstack project create --domain default --description "Service Project" service

Create the demo project and the demo user. Note: the demo user's password is also devops.
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default demo --password devops

Create the user role and assign it to the demo user
openstack role create user
openstack role add --project demo --user demo user

Verify keystone; the password is devops
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password devops
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password devops


#################################################################
# Glance

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'devops';
exit

Create the glance user and grant it the admin role; the password is devops
source /root/admin-openrc
openstack user create --domain default glance --password devops
openstack role add --project service --user glance admin


Create the image service
openstack service create --name glance --description "OpenStack Image service" image

Create the glance endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Install the glance packages
yum install -y openstack-glance

Edit /etc/glance/glance-api.conf; the password is devops
# >/etc/glance/glance-api.conf
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:devops@controller/glance 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password devops
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http 
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file 
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/


Edit /etc/glance/glance-registry.conf; the password is devops
# >/etc/glance/glance-registry.conf
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
openstack-config --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:devops@controller/glance 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance 
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password devops
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


Sync the glance database
su -s /bin/sh -c "glance-manage db_sync" glance

Start glance and enable it at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service 
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
or
axel -n 20 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Upload the image to glance
source /root/admin-openrc
glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare --visibility public --progress

#If you have built your own CentOS 7 image, you can upload it the same way, e.g.:
#glance image-create --name "CentOS7.1-x86_64" --file CentOS_7.1.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

List the images:
glance image-list

#################################################################
# Nova
Create the nova databases, the database user, and its grants
CREATE DATABASE nova;
CREATE DATABASE nova_api;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'devops';
FLUSH PRIVILEGES;
exit

Note: list existing grants with SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;
Revoke a previous grant with REVOKE ALTER ON *.* FROM 'root'@'controller';



Create the nova user and grant it the admin role; the password is devops
source /root/admin-openrc
openstack user create --domain default nova --password devops
openstack role add --project service --user nova admin

Create the compute service
openstack service create --name nova --description "OpenStack Compute" compute

Create the nova endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Create the placement user and the placement service; the password is devops
openstack user create --domain default placement --password devops
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement

Create the placement endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778

Install the nova packages
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-cert openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api


Edit /etc/nova/nova.conf; use your own management IP; the password is devops
# >/etc/nova/nova.conf
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.18.253   # management IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:devops@controller/nova
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:devops@controller/nova_api
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval -1
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password devops
openstack-config --set /etc/nova/nova.conf keystone_authtoken service_token_roles_required True
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf placement memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement project_domain_name default
openstack-config --set /etc/nova/nova.conf placement user_domain_name default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password devops
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.18.253  # management IP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.18.253  # management IP
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp


Configure /etc/httpd/conf.d/00-nova-placement-api.conf
On the line after ErrorLog /var/log/nova/nova-placement-api.log, add:
<Directory /usr/bin>
    <IfVersion >= 2.4>
        Require all granted
    </IfVersion>
    <IfVersion < 2.4>
        Order allow,deny
        Allow from all
    </IfVersion>
</Directory>
Watch the whitespace and use half-width characters only, otherwise httpd will fail to restart.

Restart httpd after the change:
systemctl restart httpd



Sync the nova_api database on the controller
su -s /bin/sh -c "nova-manage api_db sync" nova

Register the nova_cell0 database on the controller
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create cell1
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova

Confirm that nova cell0 and cell1 are registered and created
nova-manage cell_v2 list_cells

Enable the services at boot on the controller
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the nova services on the controller:
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl list-unit-files |grep openstack-nova-*
Note: after the full deployment, if a newly created VM cannot attach a volume, it may be because the openstack-nova-metadata-api.service mentioned here was not started.

Check that the deployment is healthy (all three checks should report Success)
nova-status upgrade check

nova-manage cell_v2 discover_hosts
Alternatively, set discover_hosts_in_cells_interval under [scheduler] in the controller's nova.conf to a positive interval (e.g. 300 seconds) to discover new hosts automatically; the default of -1 disables periodic discovery.
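Another option is to rediscover compute hosts on a schedule via cron instead of re-running the command by hand. A sketch (the 5-minute interval and file name are my own choices; the file is written to the current directory here for illustration and would live in /etc/cron.d on a real controller):

```shell
# Write a cron.d-style entry that runs discover_hosts every 5 minutes
# as the nova user. Copy the result to /etc/cron.d/nova-discover.
cat > nova-discover.cron <<'EOF'
*/5 * * * * nova /usr/bin/nova-manage cell_v2 discover_hosts
EOF
cat nova-discover.cron
```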

Verify the nova services
source /root/admin-openrc
nova service-list
openstack endpoint list  # view the endpoint list

#################################################################
# Neutron
Create the neutron database, user, and grants; the password is devops
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'devops';

 
Create the neutron user and grant it the admin role; the password is devops
source /root/admin-openrc
openstack user create --domain default neutron --password devops
openstack role add --project service --user neutron admin

Create the network service
openstack service create --name neutron --description "OpenStack Networking" network

Create the endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

 
Install the neutron packages
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables


# This sets up the self-service (advanced) network type
Edit /etc/neutron/neutron.conf; the password is devops
# >/etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357 
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password devops
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:devops@controller/neutron
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password devops
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Configure /etc/neutron/plugins/ml2/ml2_conf.ini
# >/etc/neutron/plugins/ml2/ml2_conf.ini
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini; set your own tunnel IP and external NIC name
# >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.16.253  #tunnel IP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True 
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True 
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: eno1 is the external NIC. The interface named here should normally have external connectivity; if it does not, the VMs will be isolated from the outside network.
192.168.16.253 is the tunnel NIC IP.
local_ip defines the tunnel network; under VXLAN: vm-linuxbridge->vxlan ------tun-----vxlan->linuxbridge-vm


Configure /etc/neutron/l3_agent.ini
# >/etc/neutron/l3_agent.ini
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver 
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug false

 
Configure /etc/neutron/dhcp_agent.ini
# >/etc/neutron/dhcp_agent.ini 
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT debug false

Reconfigure /etc/nova/nova.conf; this step lets the compute side use the neutron network; the password is devops
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696 
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357 
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password 
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default 
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default 
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service 
openstack-config --set /etc/nova/nova.conf neutron username neutron 
openstack-config --set /etc/nova/nova.conf neutron password devops
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True 
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret devops

Write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf
echo "dhcp-option-force=26,1450" >/etc/neutron/dnsmasq-neutron.conf
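The value 1450 is not arbitrary: VXLAN encapsulation adds a 50-byte header, so the MTU pushed to VMs via DHCP option 26 must be the physical MTU minus that overhead. A quick sanity check:

```shell
# VXLAN adds 50 bytes of encapsulation per packet:
#   outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50
physical_mtu=1500
vxlan_overhead=$((20 + 8 + 8 + 14))
vm_mtu=$((physical_mtu - vxlan_overhead))
echo "dhcp-option-force=26,$vm_mtu"   # prints dhcp-option-force=26,1450
```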

Configure /etc/neutron/metadata_agent.ini; the password is devops
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret devops
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_protocol http


Create a symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 
Restart the nova service on the controller, since nova.conf was just changed
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

 
Restart the neutron services on the controller and enable them at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Verify on any node; all agents should show a smiley face. If nothing is listed, restart the neutron services (previous step).
source /root/admin-openrc
openstack network agent list

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 028198fc-d7cd-49b5-8d32-432c34f0715a | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 3c0c043d-112c-49d4-a3f4-cfdc18471fa6 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 46a13130-a270-4d00-a7df-845348c183c6 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| cbf331d5-193a-4255-a12b-97d094996880 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+


Create a VXLAN-mode network so VMs can reach the outside, and a flat-mode public network; note that this public network is the egress network and must be flat mode.
source /root/admin-openrc
neutron --debug net-create --shared provider --router:external True --provider:network_type flat --provider:physical_network provider

After this step, set the public network as shared and external in the dashboard. The result of the creation:
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-11-02T08:30:18Z                 |
| description               |                                      |
| id                        | 8c53576f-7406-4506-b506-bce4729fe5d2 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| is_default                | False                                |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| project_id                | 58c047c94d5c4fbeaf72c5813df557c2     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| revision_number           | 3                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 58c047c94d5c4fbeaf72c5813df557c2     |
| updated_at                | 2017-11-02T08:30:18Z                 |
+---------------------------+--------------------------------------+


#################################################################
# Install the Dashboard
Install and configure the Dashboard first, then come back here to make changes.

yum install -y openstack-dashboard

vim /etc/httpd/conf.d/openstack-dashboard.conf
Add on line 4: WSGIApplicationGroup %{GLOBAL}

# Edit local_settings
cp /etc/openstack-dashboard/local_settings{,.bak}
vim /etc/openstack-dashboard/local_settings

Change OPENSTACK_HOST = "127.0.0.1"  to  OPENSTACK_HOST = "controller"
Change TIME_ZONE = "UTC"             to  TIME_ZONE = "Asia/Shanghai"
Change OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"  to  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Change OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST  to  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Change OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False  to  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Uncomment OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

Set ALLOWED_HOSTS = ['*']
or restrict it to the management IP: 192.168.18.253

Around line 65, add:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Around line 158, add:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}


Modify the following section:
OPENSTACK_NEUTRON_NETWORK = {
     'enable_router': True,
     'enable_quotas': True,
     'enable_ipv6': True,
     'enable_distributed_router': False,
     'enable_ha_router': False,
     'enable_lb': True,
     'enable_firewall': True,
     'enable_vpn': True,
     'enable_fip_topology_check': True,
}

Save and exit.
Start the Dashboard service and enable it at boot:
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service


http://192.168.18.253/dashboard/
Domain: default
Username: admin
Password: devops
Change the password after logging in.
After changing it, update the matching password in admin-openrc as well.
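For repeat installs, the local_settings edits above can be scripted with sed. This is a minimal sketch; the helper name is made up, and the file path and option spellings are assumptions to verify against your file:

```shell
# Hypothetical helper applying the local_settings edits described above.
# Back up /etc/openstack-dashboard/local_settings before running it.
patch_local_settings() {
  local f="$1"
  sed -i 's/^OPENSTACK_HOST = .*/OPENSTACK_HOST = "controller"/' "$f"
  sed -i 's#^TIME_ZONE = .*#TIME_ZONE = "Asia/Shanghai"#' "$f"
  sed -i 's/^OPENSTACK_KEYSTONE_DEFAULT_ROLE = .*/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"/' "$f"
  sed -i 's#:5000/v2.0#:5000/v3#' "$f"
  sed -i 's/^OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = .*/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True/' "$f"
}
```

Usage: `patch_local_settings /etc/openstack-dashboard/local_settings`, then restart httpd.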

source /root/admin-openrc

Create the provider network's subnet, named provider-sub. The allocation pool is the floating IP range handed out to VMs; DNS is 8.8.8.8 and the gateway is 218.233.109.225.
# Replace these with your own external addresses. Here 8 public IPs, 218.233.109.230-237, are given to OpenStack to allocate.
# Note the CIDR must contain the gateway and the allocation pool, hence 218.233.109.224/28.
# Reference: https://docs.openstack.org/ocata/zh_CN/install-guide-rdo/launch-instance-networks-provider.html
neutron subnet-create provider 218.233.109.224/28 --name provider-sub --allocation-pool start=218.233.109.230,end=218.233.109.237 --dns-nameserver 8.8.8.8 --gateway 218.233.109.225
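The neutron CLI is deprecated in Pike; the same subnet can also be created with the unified openstack client. A sketch using the names and addresses from this guide (requires admin-openrc to be sourced against the live cloud):

```shell
# Unified-client equivalent of the neutron subnet-create above.
openstack subnet create provider-sub \
  --network provider \
  --subnet-range 218.233.109.224/28 \
  --allocation-pool start=218.233.109.230,end=218.233.109.237 \
  --dns-nameserver 8.8.8.8 \
  --gateway 218.233.109.225
```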


Create a private network named private (network type vxlan) and a subnet named private-subnet on 192.168.1.0/24; this is the range instances get their private IPs from.
neutron net-create private --provider:network_type vxlan --router:external False --shared
neutron subnet-create private --name private-subnet --gateway 192.168.1.1 192.168.1.0/24

Create a second internal network on 192.168.2.0/24:
neutron net-create private-sub-net --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-sub-net --name sub-net --gateway 192.168.2.1 192.168.2.0/24


Create a router (done in the web UI):
Go to Project -> Network -> Routers -> Create Router.
Name the router anything you like (here "router"), set Admin State to "UP", pick "provider" as the external network, and click "Create Router"; a message confirms the router was created.

Then open the router's "Interfaces" tab and click "Add Interface".
Add an interface on the private network by selecting "private: 192.168.1.0/24".
After clicking "Add Interface", both interfaces first show as DOWN; refresh after a moment and they should be RUNNING. (They must reach the RUNNING state, otherwise instances will have no outbound connectivity.)
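The router steps above can also be done from the CLI instead of the UI. A sketch using the network names from this guide (requires admin credentials against the live cloud):

```shell
source /root/admin-openrc
# Create the router, point its gateway at the provider network,
# then plug in the private subnet.
openstack router create router
openstack router set router --external-gateway provider
openstack router add subnet router private-subnet
```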

# Check the network agents
openstack network agent list

Compute node deployment

Configure the NIC IPs:
192.168.18.251  external, admin
192.168.16.251  tunnel


# Initial environment setup
mv /etc/localtime /etc/localtime.bak
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld

hostnamectl set-hostname compute01

# Change to your own IPs; all of these are management IPs
echo "192.168.18.253 controller" >> /etc/hosts
echo "192.168.18.252 cinder" >> /etc/hosts
echo "192.168.18.251 compute01" >> /etc/hosts
Note: hostnames must not contain underscores (_); hyphens (-) are fine. Otherwise neutron-linuxbridge-agent will fail to start.

cd /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
cd ~
yum install -y epel-release
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y centos-release-openstack-pike
yum install -y openstack-selinux python-openstackclient
yum install -y gcc glibc gcc-c++ make automake cmake libtool bison flex perl git subversion mercurial
yum install -y readline-devel bzip2-devel zlib-devel libxml2-devel libxslt-devel openssl-devel kernel-devel pcre-devel boost-devel python-devel python-setuptools libpcap-devel PyYAML
yum install -y wget axel htop vim lsof lrzsz tcpdump net-tools lsof screen mtr nc zip dos2unix sysstat dstat setuptool system-config-* ntsysv mlocate telnet tree
yum upgrade


If centos-release-openstack-pike cannot be installed, use the backup repo:
wget -O /etc/yum.repos.d/CentOS-OpenStack-pike.repo https://image.leolan.top/blog/171109/5KjlIgfjbc.repo
After updating: reboot (or log out and back in) so the new kernel and hostname take effect; if the hostname has not changed, the RabbitMQ install later will fail.



#################################################################
# NTP
yum install -y ntp

vim /etc/ntp.conf
# Comment out the original overseas servers and use Chinese ones
server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

systemctl enable ntpd && systemctl restart ntpd
ntpdate 0.cn.pool.ntp.org

Check the time with ntpq -p and date

#################################################################
RabbitMQ
yum install -y erlang
yum install -y rabbitmq-server

systemctl enable rabbitmq-server.service 
systemctl restart rabbitmq-server.service 
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service


rabbitmqctl add_user openstack devops        # the password is devops
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
netstat -ntlp |grep 5672

rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management mochiweb webmachine \
rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
systemctl restart rabbitmq-server


#################################################################
Install the required packages
yum install -y openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils


Configure nova.conf. Pay attention to the management IP, the VNC external IP, and whether this is bare metal or a VM; the password is devops.
# >/etc/nova/nova.conf
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.18.251   # management IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0  # allow 4x as many vCPUs as physical cores
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password devops
openstack-config --set /etc/nova/nova.conf placement auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf placement memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf placement auth_type password 
openstack-config --set /etc/nova/nova.conf placement project_domain_name default
openstack-config --set /etc/nova/nova.conf placement user_domain_name default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password devops
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc keymap en-us
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.18.251  # management IP
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://218.233.109.229:6080/vnc_auto.html      # external IP mapped to the controller's management IP (the dashboard IP)
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu       # use kvm on bare metal; use qemu when deploying inside a VM
openstack-config --set /etc/nova/nova.conf libvirt cpu_mode none
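Whether the node can use kvm can be read from the CPU flags; a small sketch (the helper name is made up here) that picks the virt_type the note above describes:

```shell
# Pick nova's libvirt virt_type from the CPU flags: VT-x/AMD-V present -> kvm,
# otherwise (e.g. a VM without nested virtualization) -> qemu.
detect_virt_type() {
  if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo kvm
  else
    echo qemu
  fi
}
detect_virt_type
```

The result can be fed straight into the config step, e.g. `openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(detect_virt_type)"`.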


Enable libvirtd.service and openstack-nova-compute.service at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service


Verify on the controller:
source /root/admin-openrc
openstack compute service list
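On Pike, the official install guide also has the controller register new compute hosts in the cell database; until that runs (or periodic discovery is configured), the scheduler will not use them:

```shell
# Run on the controller after each new compute node joins.
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
```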

#################################################################
Install Neutron
Install the packages:
yum install -y openstack-neutron-linuxbridge ebtables ipset

Configure neutron.conf; the password is devops
# >/etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password devops
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp


Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini; change the tunnel IP; the password is devops
# >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT verbose true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.16.251  # tunnel IP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
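The official Pike guide additionally maps the provider flat network to a physical NIC on every node running the linuxbridge agent. The interface name below (eth0) is an assumption; substitute your admin NIC:

```shell
# Map the "provider" physical network to a real interface (eth0 is a placeholder).
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
  linux_bridge physical_interface_mappings provider:eth0
```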

 
Configure nova.conf; the password is devops
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password devops

Restart and enable the services
systemctl restart openstack-nova-compute.service 
systemctl enable neutron-linuxbridge-agent.service 
systemctl restart neutron-linuxbridge-agent.service
systemctl status openstack-nova-compute.service neutron-linuxbridge-agent.service

The compute node is now set up. To add another compute node, repeat this part, changing only the hostname and IP addresses.
On the controller, run openstack compute service list to check that the new node shows up.


Storage node deployment

Configure the NIC IP:
192.168.18.252  external, admin


# Initial environment setup
mv /etc/localtime /etc/localtime.bak
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld

hostnamectl set-hostname cinder

# Change to your own IPs; all of these are management IPs
echo "192.168.18.253 controller" >> /etc/hosts
echo "192.168.18.252 cinder" >> /etc/hosts
echo "192.168.18.251 compute01" >> /etc/hosts
Note: hostnames must not contain underscores (_); hyphens (-) are fine. Otherwise neutron-linuxbridge-agent will fail to start.

cd /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
cd ~
yum install -y epel-release
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y centos-release-openstack-pike
yum install -y openstack-selinux python-openstackclient
yum install -y gcc glibc gcc-c++ make automake cmake libtool bison flex perl git subversion mercurial
yum install -y readline-devel bzip2-devel zlib-devel libxml2-devel libxslt-devel openssl-devel kernel-devel pcre-devel boost-devel python-devel python-setuptools libpcap-devel PyYAML
yum install -y wget axel htop vim lsof lrzsz tcpdump net-tools lsof screen mtr nc zip dos2unix sysstat dstat setuptool system-config-* ntsysv mlocate telnet tree
yum upgrade


If centos-release-openstack-pike cannot be installed, use the backup repo:
wget -O /etc/yum.repos.d/CentOS-OpenStack-pike.repo https://image.leolan.top/blog/171109/5KjlIgfjbc.repo
After updating: reboot (or log out and back in) so the new kernel and hostname take effect; if the hostname has not changed, the RabbitMQ install later will fail.


#################################################################
# NTP
yum install -y ntp

vim /etc/ntp.conf
# Comment out the original overseas servers and use Chinese ones
server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

systemctl enable ntpd && systemctl restart ntpd
ntpdate 0.cn.pool.ntp.org

Check the time with ntpq -p and date

#################################################################
RabbitMQ
yum install -y erlang
yum install -y rabbitmq-server

systemctl enable rabbitmq-server.service 
systemctl restart rabbitmq-server.service 
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service


rabbitmqctl add_user openstack devops        # the password is devops
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
netstat -ntlp |grep 5672

rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management mochiweb webmachine \
rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
systemctl restart rabbitmq-server


#################################################################
1. Run the following on the controller node
Create the database and grant privileges:
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'devops';
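The SQL above is entered in a MariaDB session; it can also be run non-interactively with a heredoc (you will be prompted for the database root password):

```shell
mysql -u root -p <<'EOF'
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'devops';
EOF
```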


Create the service credentials
source /root/admin-openrc
openstack user create --domain default --password-prompt cinder         # set the password to devops
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3


Create the Block Storage service API endpoints
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s


Install the packages
yum install -y openstack-cinder openstack-utils

Configure cinder. Note the controller's IP; the password is devops
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:devops@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.18.253   # management IP
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password devops
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp


Initialize the Block Storage database:
su -s /bin/sh -c "cinder-manage db sync" cinder

Configure Compute to use Block Storage: edit /etc/nova/nova.conf and add the following section:
[cinder]
os_region_name = RegionOne


Start the services and enable them at boot
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

#################################################################
2. Run the following on the storage node


yum install -y lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

A new disk, /dev/sdb, is used here as the backing volume:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb


vim /etc/lvm/lvm.conf
In the devices section, add a filter as shown below; assume the OS is on /dev/sda and the volume disk is /dev/sdb.
Each element of a filter array starts with a (accept) or r (reject), followed by a regular expression for the device name.
The filter must end with r/.*/ to reject all remaining devices. You can test a filter with the command: vgs -vvvv
Every device used by LVM must be accepted by the filter, or it will be ignored.

devices {
    # If /dev/sda holds no LVM volume (plain partitions), accept only sdb:
    filter = [ "a/sdb/", "r/.*/"]

    # If /dev/sda also uses LVM, accept both. Note this will not work in a KVM guest:
    # the disks must appear as sdX, but KVM virtio disks show up as vdX.
    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}


Install the packages:
yum install -y openstack-cinder targetcli python-keystone openstack-utils

Configure cinder. Note the storage node's IP; the password is devops
cp /etc/cinder/cinder.conf{,.bak}
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:devops@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:devops@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.18.252   # management IP
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password devops
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm


Start the services and enable them at boot
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service


#################################################################
3. Run the following on the controller node

Check that the storage node was added successfully:
source /root/admin-openrc
openstack volume service list


Additional commands

Create flavors (adjust to your needs):

openstack flavor create m1.tiny --id 1 --ram 2048 --disk 20 --vcpus 1
openstack flavor create m1.small --id 2 --ram 4096 --disk 40 --vcpus 1
openstack flavor create m1.medium --id 3 --ram 8192 --disk 100 --vcpus 2
openstack flavor create m1.large --id 4 --ram 16384 --disk 200 --vcpus 4
openstack flavor create m1.xlarge --id 5 --ram 32768 --disk 200 --vcpus 8
openstack flavor create m1.xxlarge --id 6 --ram 65536 --disk 500 --vcpus 8
openstack flavor list

Create a key pair

. demo-openrc
ssh-keygen -q -N ""         # skip this step if you already have a public key
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# Verify the public key
openstack keypair list

Modify the security group

Add security group rules allowing ICMP and ports 22 and 3389:
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack security group rule create --proto tcp --dst-port 3389 default

List the available flavors, images, networks, and security groups

. demo-openrc
openstack flavor list
openstack image list
openstack network list
openstack security group list

Launch an instance

openstack server create --flavor m1.tiny --image cirros \
  --network private --security-group default \
  --key-name mykey provider-instance
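Since the instance sits on the private vxlan network behind the router, it needs a floating IP from the provider pool to be reachable from outside. A sketch, using the network and instance names from this guide:

```shell
# Allocate a floating IP on the provider network and attach it to the instance.
fip=$(openstack floating ip create -f value -c floating_ip_address provider)
openstack server add floating ip provider-instance "$fip"
```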

Check the instance's status
openstack server list

Get a console session URL for your instance and open it in a web browser:
openstack console url show provider-instance

Attach a new volume to an instance

. demo-openrc
openstack volume create --size 1 volume1   # create a 1 GB volume
openstack volume list                      # the status should change from creating to available
openstack server add volume [instance name] [the volume1 created above]    # attach the volume to an instance
openstack volume list                      # list the volumes

Orchestration

https://docs.openstack.org/project-install-guide/orchestration/ocata/launch-instance.html

Shared file systems

If your environment includes the Shared File Systems service, you can create a share and mount it on an instance.
https://docs.openstack.org/project-install-guide/shared-file-systems/ocata/install-controller-rdo.html

Container orchestration

https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/install-rdo.html

. admin-openrc
magnum service-list

Object storage

https://docs.openstack.org/project-install-guide/object-storage/ocata/controller-install-rdo.html


References:
http://www.cnblogs.com/jsonhc/tag/openstack/
https://mp.weixin.qq.com/s/9DWMKdUggnphS-m_5Mpd2Q
https://mp.weixin.qq.com/s/WZu4oX6r2u1my61-aCbwsg
http://www.cnblogs.com/elvi/p/7613861.html
http://www.xuliangwei.com/xubusi/category/openstack
https://www.hkitblog.com/?p=28531
http://www.openstack.cn
http://lib.csdn.net/base/openstack
每天5分钟玩转 OpenStack


One-click OpenStack installation

One-click OpenStack install on CentOS 7.2: http://blog.51to.com/lwm666/1944398

(●゚ω゚●)