[ Network layout ]
eth0 : NAT (Public Network)
eth1 : Host-only (Private Management Network)
eth2 : Host-only (Private Data Network)
controller : eth0 - 192.168.75.151 eth1 - 192.168.230.151
network : eth0 - 192.168.75.152 eth1 - 192.168.230.152 eth2 - 192.168.200.152
Compute : eth0 - 192.168.75.153 eth1 - 192.168.230.153 eth2 - 192.168.200.153
0. Kernel version
The kernel must be upgraded from 3.13.0-24-generic to 3.13.0-34-generic
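A minimal sketch of the check and upgrade, assuming the stock Ubuntu 14.04 kernel package names (not in the original notes):
$ uname -r                                   # check the running kernel version
$ sudo apt-get update
$ sudo apt-get install linux-image-3.13.0-34-generic linux-headers-3.13.0-34-generic
$ sudo reboot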
1. Change the hostname
$ sudo vi /etc/hostname
...
controller
...
$ sudo hostname -F /etc/hostname
$ sudo vi /etc/hosts
...
192.168.230.151 controller
192.168.230.152 network
192.168.230.153 compute
2. Configure NTP and the local time zone
$ sudo apt-get install ntp
$ sudo vi /etc/ntp.conf
...
server time.bora.net
...
$ sudo ntpdate -u time.bora.net
$ sudo ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
$ sudo service ntp restart
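A quick sanity check that time sync and the time zone took effect (standard ntp tools; not in the original notes):
$ ntpq -p        # time.bora.net should appear as a peer with a non-zero reach value
$ date           # should now print the time in KST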
3. Create a user and configure sudo
# adduser stack
# visudo
...
stack ALL=(ALL:ALL) NOPASSWD: ALL # add as the last line
4. Configure IP forwarding and reverse-path (anti-spoofing) filtering
$ sudo vi /etc/sysctl.conf
...
net.ipv4.conf.default.rp_filter=0 # 0 disables reverse-path filtering (the anti-spoofing check) so Neutron traffic is not dropped
net.ipv4.conf.all.rp_filter=0 # same as above, for all interfaces
net.ipv4.ip_forward=1
...
$ sudo sysctl -p
5. Install common packages
- Python pip
- Python development headers
- libevent development headers (for eventlet)
- Python MySQL library
- VLAN and bridge utilities
- LVM (for Cinder)
- Open vSwitch
- Python libvirt bindings (to control KVM)
- nbd client / kernel module (to mount VM disks)
- ipset (used when enable_ipset=True in ml2 to improve OVS security-group performance)
$ sudo apt-get install python-pip
$ sudo apt-get install python-dev
$ sudo apt-get install libevent-dev
$ sudo apt-get install python-mysqldb
$ sudo apt-get install vlan bridge-utils
$ sudo apt-get install lvm2
$ sudo apt-get install openvswitch-switch
$ sudo apt-get install python-libvirt
$ sudo apt-get install nbd-client
$ sudo apt-get install ipset
$ sudo apt-get install python-tox # tox: tool used to generate nova.conf
$ sudo apt-get install libmysqlclient-dev # MySQL client config needed when generating with tox
$ sudo apt-get install libpq-dev # PostgreSQL (libpq) config needed when generating with tox
$ sudo apt-get install libxml2-dev # XML parsing needed when generating with tox
$ sudo apt-get install libxslt1-dev # XML parsing needed when generating with tox
$ sudo apt-get install libvirt-dev # needed when generating with tox
$ sudo apt-get install libffi-dev # needed when generating with tox
[ Processes and packages per node ]
1. Processes running on the Controller Node
nova-api
nova-scheduler
nova-conductor
nova-consoleauth
nova-console
nova-novncproxy
nova-cert
neutron-server
2. Processes running on the Network Node
Metadata service: metadata-agent and neutron-ns-metadata-proxy only need to run on the Network Node
neutron-l3-agent
neutron-dhcp-agent
neutron-openvswitch-agent
neutron-metadata-agent # required on the Network Node for the metadata service
neutron-ns-metadata-proxy # VMs reach it through the qdhcp namespace on the Network Node, which they see as their gateway
3. Processes running on the Compute Node
nova-compute
neutron-l3-agent
neutron-openvswitch-agent
1. Neutron packages to install on the Controller Node
neutron-server
neutron-plugin-ml2
2. Neutron packages to install on the Network Node
neutron-plugin-ml2
neutron-plugin-openvswitch-agent
neutron-l3-agent (DVR)
neutron-dhcp-agent
3. Neutron packages to install on the Compute Node
neutron-common
neutron-plugin-ml2
neutron-plugin-openvswitch-agent
neutron-l3-agent (DVR)
############### controller ######################
[ Installing RabbitMQ ]
$ sudo apt-get install rabbitmq-server
$ sudo rabbitmqctl change_password guest rabbit
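A quick check that the broker is up and the guest account exists (standard rabbitmqctl commands; not in the original notes):
$ sudo rabbitmqctl status       # the broker should report as running
$ sudo rabbitmqctl list_users   # guest should be listed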
[ Installing MySQL ]
$ sudo apt-get install mysql-server python-mysqldb
$ sudo vi /etc/mysql/my.cnf
...
bind-address = 0.0.0.0
...
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
character_set_filesystem = utf8
...
$ sudo service mysql restart
[ Installing Keystone ]
1. Install the Keystone package
$ mkdir -p Git
$ cd Git
$ git clone http://git.openstack.org/openstack/keystone.git
$ cd keystone
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install pbr==0.9 # install pbr separately; its version resolution is problematic otherwise
$ sudo pip install -e . # install the source tree with pip
2. Create the database
$ mysql -uroot -pmysql
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';
3. Create conf and log directories
$ sudo mkdir -p /var/log/keystone
$ sudo chown -R stack.stack /var/log/keystone
$ sudo mkdir -p /etc/keystone
$ sudo cp ~/Git/keystone/etc/* /etc/keystone/.
$ sudo vi /etc/logrotate.d/openstack
/var/log/keystone/*.log {
daily
rotate 31
missingok
dateext
}
/var/log/nova/*.log {
daily
rotate 31
missingok
dateext
}
/var/log/cinder/*.log {
daily
rotate 31
missingok
dateext
}
/var/log/glance/*.log {
daily
rotate 31
missingok
dateext
}
/var/log/neutron/*.log {
daily
rotate 31
missingok
dateext
}
4. Rename and copy the conf files
$ sudo chown -R stack.stack /etc/keystone
$ cd /etc/keystone
$ mv keystone.conf.sample keystone.conf
$ mv logging.conf.sample logging.conf
$ mkdir -p ssl
$ cp -R ~/Git/keystone/examples/pki/certs /etc/keystone/ssl/.
$ cp -R ~/Git/keystone/examples/pki/private /etc/keystone/ssl/.
5. Edit the conf
$ sudo vi keystone.conf
[DEFAULT]
admin_token=ADMIN
admin_workers=2
max_token_size=16384
debug=True
logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
rabbit_host=controller
rabbit_password=rabbit
log_file=keystone.log
log_dir=/var/log/keystone
[catalog]
driver=keystone.catalog.backends.sql.Catalog
[database]
connection=mysql://keystone:keystone_dbpass@controller/keystone
[identity]
driver=keystone.identity.backends.sql.Identity
[paste_deploy]
config_file=/etc/keystone/keystone-paste.ini
[token]
expiration=7200
driver=keystone.token.persistence.backends.sql.Token
6. Create the Keystone tables
$ keystone-manage db_sync
7. Register the init script
$ sudo vi /etc/init/keystone.conf
description "Keystone server"
author "somebody"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [016]
chdir /var/run
exec su -c "keystone-all" stack
$ sudo service keystone start
8. Create initrc for the initial Keystone commands
$ vi initrc
export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
9. Register tenants, users, and roles
$ . initrc
$ keystone tenant-create --name=admin --description="Admin Tenant"
$ keystone tenant-create --name=service --description="Service Tenant"
$ keystone user-create --name=admin --pass=ADMIN --email=admin@example.com
$ keystone role-create --name=admin
$ keystone user-role-add --user=admin --tenant=admin --role=admin
10. Register the service
$ keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
11. Register the endpoint
$ keystone endpoint-create --service=keystone --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
12. Create adminrc
$ unset OS_SERVICE_TOKEN
$ unset OS_SERVICE_ENDPOINT
$ vi adminrc
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
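To confirm the admin credentials work end to end, something like the following can be run (standard keystone CLI; not in the original notes):
$ . adminrc
$ keystone token-get       # should return a token for admin/ADMIN
$ keystone user-list       # the admin user should be listed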
13. Keystone conf file listing
stack@controller:/etc/keystone$ ll
total 104
drwxr-xr-x 3 stack stack 4096 Jan 7 15:53 ./
drwxr-xr-x 137 root root 12288 Jan 7 17:23 ../
-rw-r--r-- 1 stack stack 1504 Jan 7 11:16 default_catalog.templates
-rw-r--r-- 1 stack stack 47749 Jan 7 11:51 keystone.conf
-rw-r--r-- 1 stack stack 4112 Jan 7 11:16 keystone-paste.ini
-rw-r--r-- 1 stack stack 1046 Jan 7 11:16 logging.conf
-rw-r--r-- 1 stack stack 8051 Jan 7 11:16 policy.json
-rw-r--r-- 1 stack stack 10676 Jan 7 11:16 policy.v3cloudsample.json
drwxrwxr-x 4 stack stack 4096 Jan 7 11:55 ssl/
stack@controller:/etc/keystone$ cd ssl
stack@controller:/etc/keystone/ssl$ ll
total 16
drwxrwxr-x 4 stack stack 4096 Jan 7 11:55 ./
drwxr-xr-x 3 stack stack 4096 Jan 7 15:53 ../
drwxrwxr-x 2 stack stack 4096 Jan 7 11:54 certs/
drwxrwxr-x 2 stack stack 4096 Jan 7 11:55 private/
[ Installing Glance ]
1. Install the Glance package
$ git clone http://git.openstack.org/openstack/glance.git
$ cd glance
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install -e .
2. Create the database
$ mysql -uroot -pmysql
mysql> CREATE DATABASE glance;
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';
3. Register the service
$ keystone user-create --name=glance --pass=glance_pass --email=glance@example.com
$ keystone user-role-add --user=glance --tenant=service --role=admin
$ keystone service-create --name=glance --type=image --description="Glance Image Service"
$ keystone endpoint-create --service=glance --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
4. Create conf and log directories
$ sudo mkdir -p /var/log/glance
$ sudo chown -R stack.stack /var/log/glance
$ sudo mkdir -p /etc/glance
$ sudo cp ~/Git/glance/etc/glance-* /etc/glance/.
$ sudo cp ~/Git/glance/etc/*.json /etc/glance/.
$ sudo cp ~/Git/glance/etc/logging.cnf.sample /etc/glance/logging.cnf
$ sudo mkdir -p /var/lib/glance
$ sudo chown stack.stack /var/lib/glance
$ mkdir -p /var/lib/glance/images
$ mkdir -p /var/lib/glance/image-cache
5. Change conf ownership
$ sudo chown -R stack.stack /etc/glance
6. Configure glance-api.conf
$ vi /etc/glance/glance-api.conf
[DEFAULT]
verbose = True
debug = True
rabbit_host = controller
rabbit_password = rabbit
image_cache_dir = /var/lib/glance/image-cache/
delayed_delete = False
scrub_time = 43200
scrubber_datadir = /var/lib/glance/scrubber
[database]
connection = mysql://glance:glance_dbpass@controller/glance
[keystone_authtoken]
identity_uri = http://controller:35357
auth_uri = http://controller:5000/v2.0
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass
[paste_deploy]
flavor=keystone
[glance_store]
filesystem_store_datadir = /var/lib/glance/images/
7. Configure glance-registry.conf
$ vi /etc/glance/glance-registry.conf
[DEFAULT]
verbose = False
debug = False
rabbit_host = controller
rabbit_password = rabbit
[database]
connection = mysql://glance:glance_dbpass@controller/glance
[keystone_authtoken]
identity_uri = http://controller:35357
auth_uri = http://controller:5000/v2.0
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass
[paste_deploy]
flavor=keystone
8. Create the Glance tables
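This step has no command in the original notes; the standard Glance command for it is:
$ glance-manage db_sync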
9. Register the init scripts
$ sudo vi /etc/init/glance-api.conf
description "Glance API server"
author "Soren Hansen <soren@linux2go.dk>"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "glance-api" stack
$ sudo service glance-api start
$ sudo vi /etc/init/glance-registry.conf
description "Glance registry server"
author "Soren Hansen <soren@linux2go.dk>"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "glance-registry" stack
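The start command is missing above; it mirrors glance-api:
$ sudo service glance-registry start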
10. Install the Glance client package
$ git clone http://git.openstack.org/openstack/python-glanceclient.git
$ cd python-glanceclient
$ git checkout -b 0.15.0 tags/0.15.0
$ sudo pip install -e .
11. Register images
$ wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
$ glance image-create --name cirros-0.3.3 --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.3-x86_64-disk.img
# register the Heat image (from devstack/files)
$ glance image-create --name [Heat]F17-x86_64-cfntools --is-public true --container-format bare --disk-format qcow2 --file F17-x86_64-cfntools.qcow2
# register the Fedora image (from devstack/files)
$ glance image-create --name Fedora-x86_64-20-20140618-sda --is-public true --container-format bare --disk-format qcow2 --file Fedora-x86_64-20-20140618-sda.qcow2
# register the MySQL image (from devstack/files)
$ glance image-create --name mysql --is-public true --container-format bare --disk-format qcow2 --file mysql.qcow2
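To confirm the uploads, the standard listing command can be used (not in the original notes):
$ glance image-list        # all registered images should show status 'active'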
[ Installing Cinder ]
1. Install the Cinder package
$ git clone http://git.openstack.org/openstack/cinder.git
$ cd cinder
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install -e .
2. Create the database
$ mysql -uroot -pmysql
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';
3. Register the service
$ keystone user-create --name=cinder --pass=cinder_pass --email=cinder@example.com
$ keystone user-role-add --user=cinder --tenant=service --role=admin
$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
$ keystone endpoint-create --service=cinder --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s
$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
$ keystone endpoint-create --service=cinderv2 --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
4. Create conf and log directories
$ sudo mkdir -p /var/log/cinder
$ sudo chown -R stack.stack /var/log/cinder
$ sudo mkdir -p /etc/cinder
$ sudo cp -R ~/Git/cinder/etc/cinder/* /etc/cinder/.
5. Change conf ownership
$ sudo chown -R stack.stack /etc/cinder
$ mv /etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf
$ sudo chown root.root /etc/cinder/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/cinder/rootwrap.d # must be owned by root
$ sudo mkdir -p /var/lib/cinder
$ sudo chown stack.stack /var/lib/cinder
$ mkdir -p /var/lib/cinder/volumes
$ sudo mkdir -p /var/lock/cinder
$ sudo chown stack.stack /var/lock/cinder
$ sudo mkdir -p /var/run/cinder
$ sudo chown stack.stack /var/run/cinder
6. Configure cinder.conf
$ vi /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend=cinder.openstack.common.rpc.impl_kombu
rabbit_host=controller
rabbit_password=rabbit
api_paste_config=api-paste.ini
state_path=/var/lib/cinder
glance_host=controller
lock_path=/var/lock/cinder
debug=True
verbose=True
rootwrap_config=/etc/cinder/rootwrap.conf
auth_strategy=keystone
volume_name_template=volume-%s
iscsi_helper=tgtadm
volumes_dir=$state_path/volumes
# volume_group=cinder-volumes # removed; the volume group is set in the backend (volume type) section below
enabled_backends=lvm-iscsi-driver
default_volume_type=lvm-iscsi-type
[lvm-iscsi-driver]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
san_ip=controller
volume_backend_name=lvm-iscsi
[database]
connection = mysql://cinder:cinder_dbpass@controller/cinder
[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_user=cinder
admin_password=cinder_pass
admin_tenant_name=service
7. Create the Cinder tables
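This step has no command in the original notes; the standard Cinder command for it is:
$ cinder-manage db sync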
8. Create the cinder-volumes volume group
$ mkdir -p ~/cinder-volumes
$ cd cinder-volumes
$ dd if=/dev/zero of=cinder-volumes-backing-file bs=1 count=0 seek=5G
$ sudo losetup /dev/loop1 /home/stack/cinder-volumes/cinder-volumes-backing-file
$ sudo fdisk /dev/loop1
n, p, 1, Enter, Enter, t, 8e, w   # new primary partition 1 using the whole disk, set type 8e (Linux LVM), write
$ sudo pvcreate /dev/loop1
$ sudo vgcreate cinder-volumes /dev/loop1
9. Register the init scripts
$ sudo vi /etc/init/cinder-api.conf
description "Cinder api server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-api.log" stack
$ sudo service cinder-api start
$ sudo vi /etc/init/cinder-scheduler.conf
description "Cinder scheduler server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-scheduler.log" stack
$ sudo service cinder-scheduler start
$ sudo vi /etc/init/cinder-volume.conf
description "Cinder volume server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log" stack
$ sudo service cinder-volume start
10. Register a volume type
$ cinder type-create lvm-iscsi-type
$ cinder type-key lvm-iscsi-type set volume_backend_name=lvm-iscsi
11. Create a test volume
$ cinder create --display-name test01 --volume-type lvm-iscsi-type 1
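To confirm the volume was really carved out of the LVM backend, checks like these can be used (standard cinder/LVM commands; not in the original notes):
$ cinder list              # test01 should reach status 'available'
$ sudo lvs cinder-volumes  # a volume-<uuid> logical volume should exist in the VG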
[ Installing Nova on the Controller ]
1. Install the Nova package
$ git clone http://git.openstack.org/openstack/nova.git
$ cd nova
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install -e .
$ git clone https://github.com/kanaka/novnc.git
$ sudo cp -R novnc /usr/share/novnc
2. Create the database
$ mysql -uroot -pmysql
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';
3. Register the service
$ keystone user-create --name=nova --pass=nova_pass --email=nova@example.com
$ keystone user-role-add --user=nova --tenant=service --role=admin
$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"
$ keystone endpoint-create --service=nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
4. Generate the conf file
$ cd ~/Git/nova
$ sudo tox -i http://xxx.xxx.xxx.xxx/pypi/web/simple -egenconfig # IP of an internal PyPI mirror
$ sudo chown stack.stack /home/stack/Git/nova/etc/nova/nova.conf.sample
5. Create conf and log directories
$ sudo mkdir -p /var/log/nova
$ sudo chown -R stack.stack /var/log/nova
$ sudo mkdir -p /etc/nova
$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.
6. Change conf ownership
$ sudo chown -R stack.stack /etc/nova
$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf
$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf
$ sudo chown root.root /etc/nova/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/nova/rootwrap.d # must be owned by root
$ sudo mkdir -p /var/lib/nova
$ sudo chown stack.stack /var/lib/nova
$ sudo mkdir -p /var/lock/nova
$ sudo chown stack.stack /var/lock/nova
$ sudo mkdir -p /var/run/nova
$ sudo chown stack.stack /var/run/nova
7. Configure nova.conf
$ vi /etc/nova/nova.conf
[DEFAULT]
rabbit_host=controller
rabbit_password=rabbit
rpc_backend=rabbit
my_ip=192.168.230.151
state_path=/var/lib/nova
rootwrap_config=/etc/nova/rootwrap.conf
api_paste_config=api-paste.ini
auth_strategy=keystone
allow_resize_to_same_host=true
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
force_dhcp_release=true
security_group_api=neutron
lock_path=/var/lock/nova
debug=true
verbose=true
log_dir=/var/log/nova
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vncserver_listen=192.168.230.151
vncserver_proxyclient_address=192.168.230.151
[cinder]
catalog_info=volume:cinder:publicURL
[database]
connection = mysql://nova:nova_dbpass@controller/nova
[glance]
host=controller
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
[libvirt]
use_virtio_for_bridges=true
virt_type=kvm
[neutron]
service_metadata_proxy=True
metadata_proxy_shared_secret=openstack
url=http://192.168.230.151:9696
admin_username=neutron
admin_password=neutron_pass
admin_tenant_name=service
admin_auth_url=http://controller:5000/v2.0
auth_strategy=keystone
8. Create the Nova tables
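This step has no command in the original notes; the standard Nova command for it is:
$ nova-manage db sync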
9. Register the init scripts
$ sudo vi /etc/init/nova-api.conf
description "Nova api server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log" stack
$ sudo service nova-api start
$ sudo vi /etc/init/nova-scheduler.conf
description "Nova scheduler server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-scheduler --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack
$ sudo service nova-scheduler start
$ sudo vi /etc/init/nova-conductor.conf
description "Nova conductor server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-conductor.log" stack
$ sudo service nova-conductor start
$ sudo vi /etc/init/nova-consoleauth.conf
description "Nova consoleauth server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-consoleauth --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-consoleauth.log" stack
$ sudo service nova-consoleauth start
$ sudo vi /etc/init/nova-console.conf
description "Nova console server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-console --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-console.log" stack
$ sudo service nova-console start
$ sudo vi /etc/init/nova-cert.conf
description "Nova cert server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-cert --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-cert.log" stack
$ sudo service nova-cert start
$ sudo vi /etc/init/nova-novncproxy.conf
description "Nova novncproxy server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novncproxy.log" stack
$ sudo service nova-novncproxy start
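Once the services above are started, they can be checked from the controller (standard nova CLI; assumes adminrc has been sourced):
$ . adminrc
$ nova service-list        # nova-scheduler, nova-conductor, nova-consoleauth, nova-console, and nova-cert should report state 'up'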
[ Installing Neutron on the Controller ]
1. Install the Neutron package
$ git clone http://git.openstack.org/openstack/neutron.git
$ cd neutron
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install -e .
2. Create the database
$ mysql -uroot -pmysql
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';
3. Register the service
keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com
keystone service-create --name=neutron --type=network --description="OpenStack Networking"
keystone user-role-add --user=neutron --tenant=service --role=admin
keystone endpoint-create --service=neutron --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696
4. Create conf and log directories
$ sudo mkdir -p /var/log/neutron
$ sudo chown -R stack.stack /var/log/neutron
$ sudo mkdir -p /etc/neutron
$ sudo mkdir -p /etc/neutron/plugins
$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.
$ sudo cp -R ~/Git/neutron/etc/neutron/plugins/ml2 /etc/neutron/plugins/.
$ sudo cp -R ~/Git/neutron/etc/neutron/rootwrap.d/ /etc/neutron/.
5. Change conf ownership
$ sudo chown -R stack.stack /etc/neutron
$ sudo chown root.root /etc/neutron/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/neutron/rootwrap.d
$ sudo mkdir -p /var/lib/neutron
$ sudo chown stack.stack /var/lib/neutron
$ sudo mkdir -p /var/run/neutron
$ sudo chown stack.stack /var/run/neutron
6. Configure neutron.conf
$ vi /etc/neutron/neutron.conf
[DEFAULT]
router_distributed = True
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 86be.......... # must be the tenant ID, not the tenant name
nova_admin_password = nova_pass
nova_admin_auth_url = http://controller:35357/v2.0
rabbit_host=controller
rabbit_password=rabbit
notification_driver=neutron.openstack.common.notifier.rpc_notifier
rpc_backend=rabbit
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
[database]
connection = mysql://neutron:neutron_dbpass@controller/neutron
7. Configure ml2_conf.ini
$ vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = 1001:2000
vxlan_group = 239.1.1.1
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True
[ovs]
local_ip = 192.168.200.151
tunnel_types = vxlan
tunnel_id_ranges = 1001:2000
enable_tunneling = True
bridge_mappings = external:br-ex
8. Create the Neutron tables
$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno
9. Register the init script
$ sudo vi /etc/init/neutron-server.conf
# vim:set ft=upstart ts=2 et:
description "Neutron API Server"
author "Chuck Short <zulcss@ubuntu.com>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /var/run
script
[ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \
--config-file /etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
--log-file /var/log/neutron/neutron-server.log $CONF_ARG
end script
Starting the Neutron server manually
$ neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log
10. Verify with CLI commands
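This step has no commands in the original notes; typical checks with the standard neutron CLI are:
$ neutron ext-list         # extensions loaded by the ML2/L3 plugins (router, dvr, ...)
$ neutron agent-list       # agents will appear here once the network and compute nodes are set up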
11. Neutron Service Restart
$ vi service-neutron.sh
#!/bin/bash
sudo service neutron-server $1
$ chmod 755 service-neutron.sh
$ ./service-neutron.sh restart
############### Network ######################
[ Installing Neutron on the Network Node ]
1. Install the Neutron package
$ git clone http://git.openstack.org/openstack/neutron.git
$ cd neutron
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install pbr==0.9 # install pbr separately; its version resolution is problematic otherwise
$ sudo pip install -e .
$ sudo apt-get install dnsmasq
2. Create conf and log directories
$ sudo mkdir -p /var/log/neutron
$ sudo chown -R stack.stack /var/log/neutron
$ sudo mkdir -p /etc/neutron
$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.
$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.
3. Change conf ownership
$ sudo chown -R stack.stack /etc/neutron
$ sudo chown root.root /etc/neutron/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/neutron/rootwrap.d
$ sudo mkdir -p /var/lib/neutron
$ sudo chown stack.stack /var/lib/neutron
$ sudo mkdir -p /var/run/neutron
$ sudo chown stack.stack /var/run/neutron
4. Configure neutron.conf
$ vi /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = service # should be the service tenant's ID, not its name (see the controller note)
nova_admin_password = nova_pass
nova_admin_auth_url = http://controller:35357/v2.0
rabbit_host=controller
rabbit_password=rabbit
notification_driver=neutron.openstack.common.notifier.rpc_notifier
rpc_backend=rabbit
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
[database]
connection = mysql://neutron:neutron_dbpass@controller/neutron
5. Configure ml2_conf.ini
$ vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = 1001:2000
vxlan_group = 239.1.1.1
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True
[ovs]
local_ip = 192.168.200.152
tunnel_types = vxlan
tunnel_id_ranges = 1001:2000
enable_tunneling = True
bridge_mappings = external:br-ex
6. Configure the L3 agent
$ vi /etc/neutron/l3_agent.ini
[DEFAULT]
debug = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
agent_mode = dvr_snat
7. Configure the DHCP agent
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = True
enable_metadata_network = True
dhcp_delete_namespaces = True
verbose = True
8. Configure the metadata agent
$ vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne # writing RegionOne here causes an error
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
nova_metadata_ip = controller
metadata_proxy_shared_secret = openstack
verbose = True
9. Create bridges and ports
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl add-port br-ex eth0
$ sudo ovs-vsctl add-br br-tun
$ sudo ovs-vsctl add-port br-tun eth2
10. Register the init scripts
$ sudo vi /etc/init/neutron-openvswitch-agent.conf
description "Neutron OpenVSwitch Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack
$ sudo service neutron-openvswitch-agent start
$ sudo vi /etc/init/neutron-l3-agent.conf
description "Neutron L3 Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
pre-start script
# Check to see if openvswitch plugin in use by checking
# status of cleanup upstart configuration
if status neutron-ovs-cleanup; then
start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent
fi
end script
exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack
$ sudo service neutron-l3-agent start
$ sudo vi /etc/init/neutron-dhcp-agent.conf
description "Neutron dhcp Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/neutron-dhcp-agent.log" stack
$ sudo service neutron-dhcp-agent start
$ sudo vi /etc/init/neutron-metadata-agent.conf
description "Neutron metadata Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack
$ sudo service neutron-metadata-agent start
11. Verify the installation
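This step has no commands in the original notes; from the controller, with adminrc sourced, the agents on this node should now be registered:
$ neutron agent-list       # the Open vSwitch, L3, DHCP, and metadata agents on host 'network' should show alive ':-)'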
12. Neutron Service Restart
$ vi service-neutron.sh
#!/bin/bash
sudo service neutron-openvswitch-agent $1
sudo service neutron-dhcp-agent $1
sudo service neutron-metadata-agent $1
sudo service neutron-l3-agent $1
$ chmod 755 service-neutron.sh
$ ./service-neutron.sh restart
############### Compute ######################
[ Installing Nova on the Compute Node ]
1. Install the Nova package
$ git clone http://git.openstack.org/openstack/nova.git
$ cd nova
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install pbr==0.9 # install pbr separately; its version resolution is problematic otherwise
$ sudo pip install -e .
2. Create conf and log directories
$ sudo mkdir -p /var/log/nova
$ sudo chown -R stack.stack /var/log/nova
$ sudo mkdir -p /etc/nova
$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.
3. Change conf ownership
$ sudo chown -R stack.stack /etc/nova
$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf
$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf
$ sudo chown root.root /etc/nova/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/nova/rootwrap.d # must be owned by root
$ sudo mkdir -p /var/lib/nova
$ sudo chown stack.stack /var/lib/nova
$ sudo mkdir -p /var/lib/nova/instances
$ sudo chown stack.stack /var/lib/nova/instances
$ sudo mkdir -p /var/lock/nova
$ sudo chown stack.stack /var/lock/nova
$ sudo mkdir -p /var/run/nova
$ sudo chown stack.stack /var/run/nova
Copy nova.conf and logging.conf (run on the Controller node)
$ scp /etc/nova/logging.conf /etc/nova/nova.conf stack@compute:/etc/nova/.
4. Configure nova.conf
$ vi /etc/nova/nova.conf
[DEFAULT]
rabbit_host=controller
rabbit_password=rabbit
rpc_backend=rabbit
my_ip=192.168.230.153
state_path=/var/lib/nova
rootwrap_config=/etc/nova/rootwrap.conf
api_paste_config=api-paste.ini
auth_strategy=keystone
allow_resize_to_same_host=true
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
force_dhcp_release=true
security_group_api=neutron
lock_path=/var/lock/nova
debug=true
verbose=true
log_dir=/var/log/nova
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_base_url=http://controller:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=controller
[cinder]
catalog_info=volume:cinder:publicURL
[database]
connection = mysql://nova:nova_dbpass@controller/nova
[glance]
host=controller
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
[libvirt]
use_virtio_for_bridges=true
virt_type=kvm
[neutron]
metadata_proxy_shared_secret=openstack
url=http://192.168.230.151:9696
admin_username=neutron
admin_password=neutron_pass
admin_tenant_name=service
admin_auth_url=http://controller:5000/v2.0
auth_strategy=keystone
5. Register the init script
$ sudo vi /etc/init/nova-compute.conf
description "Nova compute server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "nova-compute --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-compute.log" stack
$ sudo service nova-compute start
[ Installing Neutron on the Compute Node ]
1. Install the Neutron package
$ git clone http://git.openstack.org/openstack/neutron.git
$ cd neutron
$ git checkout -b 2014.2.1 tags/2014.2.1
$ sudo pip install -e .
2. Create conf and log directories
$ sudo mkdir -p /var/log/neutron
$ sudo chown -R stack.stack /var/log/neutron
$ sudo mkdir -p /etc/neutron
$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.
$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.
3. Change conf ownership
$ sudo chown -R stack.stack /etc/neutron
$ sudo chown root.root /etc/neutron/rootwrap.conf # must be owned by root
$ sudo chown -R root.root /etc/neutron/rootwrap.d
$ sudo mkdir -p /var/lib/neutron
$ sudo chown stack.stack /var/lib/neutron
$ sudo mkdir -p /var/run/neutron
$ sudo chown stack.stack /var/run/neutron
Copy the etc files (from the Network Node)
$ scp /etc/neutron/* stack@compute:/etc/neutron/.
$ scp /etc/neutron/plugins/ml2/ml2_conf.ini stack@compute:/etc/neutron/plugins/ml2/.
4. Configure neutron.conf
$ vi /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = service # should be the service tenant's ID, not its name (see the controller note)
nova_admin_password = nova_pass
nova_admin_auth_url = http://controller:35357/v2.0
rabbit_host=controller
rabbit_password=rabbit
notification_driver=neutron.openstack.common.notifier.rpc_notifier
rpc_backend=rabbit
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
[database]
connection = mysql://neutron:neutron_dbpass@controller/neutron
5. Configure ml2_conf.ini
$ vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = 1001:2000
vxlan_group = 239.1.1.1
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent] # added on the Compute Node
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True
[ovs] # added on the Compute Node
local_ip = 192.168.200.153
tunnel_types = vxlan
tunnel_id_ranges = 1001:2000
enable_tunneling = True
bridge_mappings = external:br-ex
6. Configure the L3 agent
$ vi /etc/neutron/l3_agent.ini
[DEFAULT]
debug = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
agent_mode = dvr
7. Configure the DHCP agent
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = True
enable_metadata_network = True
dhcp_delete_namespaces = True
verbose = True
8. Configure the metadata agent
$ vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne # writing RegionOne here causes an error
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
nova_metadata_ip = controller
metadata_proxy_shared_secret = openstack
verbose = True
$ keystone endpoint-list # check the actual region before setting auth_region
9. Create bridges and ports
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl add-port br-ex eth0
$ sudo ovs-vsctl add-br br-tun
$ sudo ovs-vsctl add-port br-tun eth2
10. Register the init scripts
$ sudo vi /etc/init/neutron-openvswitch-agent.conf
description "Neutron OpenVSwitch Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack
$ sudo service neutron-openvswitch-agent start
$ sudo vi /etc/init/neutron-l3-agent.conf
description "Neutron L3 Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
pre-start script
# Check to see if openvswitch plugin in use by checking
# status of cleanup upstart configuration
if status neutron-ovs-cleanup; then
start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent
fi
end script
exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack
$ sudo service neutron-l3-agent start
$ sudo vi /etc/init/neutron-metadata-agent.conf
description "Neutron metadata Agent server"
author "somebody"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack
$ sudo service neutron-metadata-agent start
11. Neutron Service Restart
$ vi service-neutron.sh
#!/bin/bash
sudo service neutron-openvswitch-agent $1
sudo service neutron-metadata-agent $1
sudo service neutron-l3-agent $1
$ chmod 755 service-neutron.sh
$ ./service-neutron.sh restart
Creating the external network
$ neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
$ neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.75.193,end=192.168.75.254 --disable-dhcp --gateway 192.168.75.2 192.168.75.0/24
Creating the internal network
$ neutron net-create demo-net --provider:network_type vxlan
$ neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24
Creating the router
$ neutron router-create demo-router
$ neutron router-interface-add demo-router demo-subnet
$ neutron router-gateway-set demo-router ext-net
Registering security group rules
$ neutron security-group-rule-create --protocol icmp --direction ingress default
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default
MTU settings
1. Set in /etc/network/interfaces
auto eth2
iface eth2 inet static
address 192.168.200.152
netmask 255.255.255.0
mtu 9000
$ sudo ifdown eth2
$ sudo ifup eth2
2. Set dynamically (reboot required)
$ ifconfig eth2 mtu 9000
$ reboot
Adding a default gateway route
$ sudo route add -net "0.0.0.0/0" gw "10.0.0.1"
Creating a VM
$ nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e
Adding a floating IP
$ neutron floatingip-create ext-net
$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]
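The IDs for the associate command can be looked up with the standard CLI (not in the original notes):
$ neutron port-list        # pick the port whose fixed IP matches the VM's address on demo-net
$ neutron floatingip-list  # shows the ID of the floating IP created above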
Calling the metadata service
From inside the cirros VM
$ wget http://169.254.169.254/latest/meta-data/instance-id
Calling metadata directly from the Controller node
$ curl \
-H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \
-H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \
-H 'x-instance-id-signature: \
80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \
http://192.168.230.230:8775/latest/meta-data
[ Computing x-instance-id-signature ]
>>> import hmac
>>> import hashlib
>>> hmac.new('openstack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()
'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'
>>>
Viewing config with comments and blank lines stripped
$ cat /etc/nova/nova.conf | grep -v ^# | grep -v ^$
Checking VXLAN connectivity
$ sudo ovs-ofctl show br-tun
Order of operations to delete a network
1. Remove the router interface for the subnet
$ neutron router-interface-delete [router-id] [subnet-id]
2. Delete the subnet
$ neutron subnet-delete [subnet-id]
3. Delete the network
$ neutron net-delete [net-id]
Building a distribution installable with pip
$ sudo python setup.py sdist --formats=gztar