
[ Starting the Neutron Server ]

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


[ The nova boot command ]

$ nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


[ Checking the VXLAN connection ]

$ sudo ovs-ofctl show br-tun


[ neutron.conf setting on the controller node ]

nova_admin_tenant_id = service    # must be the tenant ID, not the name
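
A quick way to look up that tenant ID (a rough sketch; assumes the keystone CLI and the adminrc from this guide):

$ keystone tenant-list | awk '/ service / {print $2}'     # prints the id column for the service tenant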



[ Order for deleting a net ]

1. Remove the router's interface to the subnet

$ neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

$ neutron subnet-delete [subnet-id]


3. Delete the net

$ neutron net-delete [net-id]



[ Creating a net as VXLAN ]

$ neutron net-create demo-net --provider:network_type vxlan



[ Registering security rules ]

$ neutron security-group-rule-create --protocol icmp --direction ingress default

$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default



[ Registering a default gateway route ]

$ sudo route add -net "0.0.0.0/0" gw "10.0.0.1"
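
On newer systems the same default route can be added with iproute2 (equivalent sketch):

$ sudo ip route add default via 10.0.0.1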



[ MTU settings ]

1. Set in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set dynamically (a reboot is required)

$ sudo ifconfig eth2 mtu 9000

$ sudo reboot
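
The same runtime change with iproute2 (a sketch; like ifconfig, it does not persist unless also set in /etc/network/interfaces):

$ sudo ip link set eth2 mtu 9000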



[ Adding a floating IP ]

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]
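
One way to find the [fixedip-port-id] (a sketch; <vm-name> is a placeholder):

$ nova interface-list <vm-name>     # the "Port ID" column is the fixed-ip port id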



[ Building a distribution installable with pip ]

$ sudo python setup.py sdist --formats=gztar
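
The tarball lands under dist/ and can then be installed with pip (sketch; the file name depends on the package and version):

$ pip install dist/<package>-<version>.tar.gz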



[ The metadata service ]

1. metadata-agent and neutron-ns-metadata-proxy only need to run on the network node.

   They are not needed on the compute node.

   VMs on compute nodes call the service via the network node's qdhcp namespace, which they see as their gateway (see the check sketch below).


2. Edit /etc/nova/nova.conf on the controller node

[neutron]

service_metadata_proxy=True


3. Edit /etc/neutron/metadata_agent.ini on the network and compute nodes

auth_region = regionOne   # writing RegionOne causes an error


[ Inside the cirros VM ]

$ wget http://169.254.169.254/latest/meta-data/instance-id
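

A rough way to confirm the layout described above on the network node (assumes the namespace setup from this guide):

$ ip netns list | grep qdhcp                  # one qdhcp-<net-id> namespace per tenant network
$ ps aux | grep neutron-ns-metadata-proxy     # proxies spawned inside those namespaces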




[ Viewing settings without comments ]

$ cat /etc/nova/nova.conf | grep -v ^# | grep -v ^$ | grep metadata

$ cat /etc/neutron/metadata_agent.ini | grep -v ^# | grep -v ^$ | grep metadata

$ cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$



[ Calling the metadata service directly from the controller node ]

curl \

  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \

  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \

  -H 'x-instance-id-signature: 80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \

  http://localhost:8775/latest/meta-data


[ Computing the x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> # the key must match metadata_proxy_shared_secret; the message is the instance id
>>> hmac.new('opensack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>


[ neutron server init script ]

1. Delete the file previously copied to /etc/init.d/neutron-server


2. sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script

  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server

  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \

    --config-file /etc/neutron/neutron.conf \

    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \

    --log-file /var/log/neutron/neutron-server.log $CONF_ARG

end script
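
After writing the upstart job, the configuration can be picked up without a reboot (sketch):

$ sudo initctl reload-configuration
$ sudo service neutron-server start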










[ Network layout ]

eth0  : NAT (Public Network)

eth1  : Host-only (Private Management Network)

eth2  : Host-only (Private Data Network)


controller : eth0 - 192.168.75.151    eth1 - 192.168.230.151

network    : eth0 - 192.168.75.152    eth1 - 192.168.230.152    eth2 - 192.168.200.152

compute    : eth0 - 192.168.75.153    eth1 - 192.168.230.153    eth2 - 192.168.200.153


0. Kernel version

The kernel must be upgraded from 3.13.0-24-generic to 3.13.0-34-generic


1. Change the hostname

$ sudo vi /etc/hostname

...

controller

...

$ sudo hostname -F /etc/hostname


$ sudo vi /etc/hosts

...

192.168.230.151 controller

192.168.230.152 network

192.168.230.153 compute


2. NTP and local time settings

$ sudo apt-get install ntp

$ sudo vi /etc/ntp.conf

...

server time.bora.net

...

$ sudo ntpdate -u time.bora.net

$ sudo ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

$ sudo service ntp restart


3. Create a user and configure sudo

# adduser stack

# visudo

...

stack   ALL=(ALL:ALL) NOPASSWD: ALL           # add as the last line


4. IP forwarding and reverse-path filter settings

$ sudo vi /etc/sysctl.conf

...

net.ipv4.conf.default.rp_filter=0         # disable reverse-path filtering (1 would enable the anti-spoofing check)

net.ipv4.conf.all.rp_filter=0             # disable reverse-path filtering (1 would enable the anti-spoofing check)

net.ipv4.ip_forward=1

...

$ sudo sysctl -p


5. Install common packages

- Python pip library

- Python development library

- Python eventlet development library

- Python MySQL library

- vlan and bridge utilities

- lvm                             (for Cinder)

- Open vSwitch

- Python libvirt library          (to control KVM)

- nbd kernel module               (to mount VM disks)

- ipset                           (used when enable_ipset=True in ml2 to improve OVS performance)


$ sudo apt-get install python-pip

$ sudo apt-get install python-dev

$ sudo apt-get install libevent-dev

$ sudo apt-get install python-mysqldb

$ sudo apt-get install vlan bridge-utils

$ sudo apt-get install lvm2

$ sudo apt-get install openvswitch-switch

$ sudo apt-get install python-libvirt

$ sudo apt-get install nbd-client

$ sudo apt-get install ipset


$ sudo apt-get install python-tox            # tox: used to generate nova.conf

$ sudo apt-get install libmysqlclient-dev    # mysql config files are needed when generating with tox

$ sudo apt-get install libpq-dev             # pq config files are needed when generating with tox

$ sudo apt-get install libxml2-dev           # XML parsing is needed when generating with tox

$ sudo apt-get install libxslt1-dev          # XML parsing is needed when generating with tox

$ sudo apt-get install libvirt-dev           # needed when generating with tox

$ sudo apt-get install libffi-dev            # needed when generating with tox



[ Processes and packages per server ]


1. Processes running on the controller node

nova-api

nova-scheduler

nova-conductor

nova-consoleauth

nova-console

nova-novncproxy

nova-cert


neutron-server


2. Processes running on the network node

   metadata service: metadata-agent and neutron-ns-metadata-proxy only need to run on the network node

neutron-l3-agent

neutron-dhcp-agent

neutron-openvswitch-agent

neutron-metadata-agent          # needed on the network node for the metadata service

neutron-ns-metadata-proxy     # VMs call it via the network node's qdhcp namespace, which they see as their gateway


3. Processes running on the compute node

nova-compute


neutron-l3-agent

neutron-openvswitch-agent



1. Neutron packages to install on the controller node

neutron-server

neutron-plugin-ml2


2. Neutron packages to install on the network node

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)

neutron-dhcp-agent


3. Neutron packages to install on the compute node

neutron-common

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)



###############   controller   ######################


[ Installing RabbitMQ ]

$ sudo apt-get install rabbitmq-server

$ sudo rabbitmqctl change_password guest rabbit


[ Installing MySQL ]

$ sudo apt-get install mysql-server python-mysqldb

$ sudo vi /etc/mysql/my.cnf

...

bind-address        = 0.0.0.0

...

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

character_set_filesystem = utf8

...

$ sudo service mysql restart


[ Installing Keystone ]


1. Install the Keystone package

$ mkdir -p Git

$ cd Git

$ git clone http://git.openstack.org/openstack/keystone.git

$ cd keystone

$ git checkout -b 2014.2.1 tags/2014.2.1


$ sudo pip install pbr==0.9                # pbr has version-pinning issues, so install it separately

$ sudo pip install -e .                    # install the source with pip


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';


3. Create conf and log directories

$ sudo mkdir -p /var/log/keystone

$ sudo chown -R stack.stack /var/log/keystone

$ sudo mkdir -p /etc/keystone

$ sudo cp ~/Git/keystone/etc/* /etc/keystone/.


$ sudo vi /etc/logrotate.d/openstack

/var/log/keystone/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/nova/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/cinder/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/glance/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/neutron/*.log {

    daily

    rotate 31

    missingok

    dateext

}
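
The rotation rules can be dry-run tested before relying on them (sketch; -d only prints what would happen):

$ sudo logrotate -d /etc/logrotate.d/openstack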


4. Copy the conf files

$ sudo chown -R stack.stack /etc/keystone

$ cd /etc/keystone

$ mv keystone.conf.sample keystone.conf

$ mv logging.conf.sample logging.conf

$ mkdir -p ssl

$ cp -R ~/Git/keystone/examples/pki/certs /etc/keystone/ssl/.

$ cp -R ~/Git/keystone/examples/pki/private /etc/keystone/ssl/.


5. Edit keystone.conf

$ sudo vi keystone.conf


[DEFAULT]

admin_token=ADMIN

admin_workers=2

max_token_size=16384

debug=True

logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s

rabbit_host=controller

rabbit_password=rabbit

log_file=keystone.log

log_dir=/var/log/keystone


[catalog]

 driver=keystone.catalog.backends.sql.Catalog


[database]

connection=mysql://keystone:keystone_dbpass@controller/keystone


[identity]

driver=keystone.identity.backends.sql.Identity


[paste_deploy]

config_file=/etc/keystone/keystone-paste.ini


[token]

expiration=7200

driver=keystone.token.persistence.backends.sql.Token


6. Create the Keystone tables

$ keystone-manage db_sync


7. Register the init script

$ sudo vi /etc/init/keystone.conf


description "Keystone server"

author "somebody"


start on (filesystem and net-device-up IFACE!=lo)

stop on runlevel [016]


chdir /var/run


exec su -c "keystone-all" stack


$ sudo service keystone start


8. Create initrc for the initial Keystone commands

$ vi initrc


export OS_SERVICE_TOKEN=ADMIN

export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0


9. Register tenants, users, and roles

$ . initrc

$ keystone tenant-create --name=admin --description="Admin Tenant"

$ keystone tenant-create --name=service --description="Service Tenant"

$ keystone user-create --name=admin --pass=ADMIN --email=admin@example.com

$ keystone role-create --name=admin

$ keystone user-role-add --user=admin --tenant=admin --role=admin


10. Register the service

$ keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"


11. Register the endpoint

$ keystone endpoint-create --service=keystone --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0


12. Create adminrc

$ unset OS_SERVICE_TOKEN

$ unset OS_SERVICE_ENDPOINT

$ vi adminrc

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN

export OS_TENANT_NAME=admin

export OS_AUTH_URL=http://controller:35357/v2.0


13. Keystone conf file listing

stack@controller:/etc/keystone$ ll

total 104

drwxr-xr-x   3 stack stack  4096 Jan  7 15:53 ./

drwxr-xr-x 137 root  root  12288 Jan  7 17:23 ../

-rw-r--r--   1 stack stack  1504 Jan  7 11:16 default_catalog.templates

-rw-r--r--   1 stack stack 47749 Jan  7 11:51 keystone.conf

-rw-r--r--   1 stack stack  4112 Jan  7 11:16 keystone-paste.ini

-rw-r--r--   1 stack stack  1046 Jan  7 11:16 logging.conf

-rw-r--r--   1 stack stack  8051 Jan  7 11:16 policy.json

-rw-r--r--   1 stack stack 10676 Jan  7 11:16 policy.v3cloudsample.json

drwxrwxr-x   4 stack stack  4096 Jan  7 11:55 ssl/

stack@controller:/etc/keystone$ cd ssl

stack@controller:/etc/keystone/ssl$ ll

total 16

drwxrwxr-x 4 stack stack 4096 Jan  7 11:55 ./

drwxr-xr-x 3 stack stack 4096 Jan  7 15:53 ../

drwxrwxr-x 2 stack stack 4096 Jan  7 11:54 certs/

drwxrwxr-x 2 stack stack 4096 Jan  7 11:55 private/



[ Installing Glance ]


1. Install the Glance package

$ git clone http://git.openstack.org/openstack/glance.git

$ cd glance

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE glance;

mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';


3. Register the service

$ keystone user-create --name=glance --pass=glance_pass --email=glance@example.com

$ keystone user-role-add --user=glance --tenant=service --role=admin

$ keystone service-create --name=glance --type=image --description="Glance Image Service"

$ keystone endpoint-create --service=glance --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292


4. Create conf and log directories

$ sudo mkdir -p /var/log/glance

$ sudo chown -R stack.stack /var/log/glance

$ sudo mkdir -p /etc/glance

$ sudo cp ~/Git/glance/etc/glance-* /etc/glance/.

$ sudo cp ~/Git/glance/etc/*.json /etc/glance/.

$ sudo cp ~/Git/glance/etc/logging.cnf.sample /etc/glance/logging.cnf


$ sudo mkdir -p /var/lib/glance

$ sudo chown stack.stack /var/lib/glance

$ mkdir -p /var/lib/glance/images

$ mkdir -p /var/lib/glance/image-cache


5. Change conf ownership

$ sudo chown -R stack.stack /etc/glance


6. Edit glance-api.conf

$ vi /etc/glance/glance-api.conf


[DEFAULT]

verbose = True

debug = True

rabbit_host = controller

rabbit_password = rabbit

image_cache_dir = /var/lib/glance/image-cache/

delayed_delete = False

scrub_time = 43200

scrubber_datadir = /var/lib/glance/scrubber


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


[glance_store]

filesystem_store_datadir = /var/lib/glance/images/


7. Edit glance-registry.conf

$ vi /etc/glance/glance-registry.conf


[DEFAULT]

verbose = False

debug = False

rabbit_host = controller

rabbit_password = rabbit


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


8. Create the Glance tables

$ glance-manage db_sync


9. Register the init scripts

$ sudo vi /etc/init/glance-api.conf


description "Glance API server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-api" stack


$ sudo service glance-api start


$ sudo vi /etc/init/glance-registry.conf


description "Glance registry server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-registry" stack


10. Install the Glance client package

$ git clone http://git.openstack.org/openstack/python-glanceclient.git

$ cd python-glanceclient

$ git checkout -b 0.15.0 tags/0.15.0

$ sudo pip install -e .


11. Register images

$ wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

$ glance image-create --name cirros-0.3.3 --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.3-x86_64-disk.img


# Register the Heat image (from devstack/files)

$ glance image-create --name [Heat]F17-x86_64-cfntools --is-public true --container-format bare --disk-format qcow2 --file F17-x86_64-cfntools.qcow2


# Register the Fedora image (from devstack/files)

$ glance image-create --name Fedora-x86_64-20-20140618-sda --is-public true --container-format bare --disk-format qcow2 --file Fedora-x86_64-20-20140618-sda.qcow2


# Register the mysql image (from devstack/files)

$ glance image-create --name mysql --is-public true --container-format bare --disk-format qcow2 --file mysql.qcow2



[ Installing Cinder ]


1. Install the Cinder package

$ git clone http://git.openstack.org/openstack/cinder.git

$ cd cinder

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';


3. Register the service

$ keystone user-create --name=cinder --pass=cinder_pass --email=cinder@example.com

$ keystone user-role-add --user=cinder --tenant=service --role=admin

$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

$ keystone endpoint-create --service=cinder --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

$ keystone endpoint-create --service=cinderv2 --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s


4. Create conf and log directories

$ sudo mkdir -p /var/log/cinder

$ sudo chown -R stack.stack /var/log/cinder

$ sudo mkdir -p /etc/cinder

$ sudo cp -R ~/Git/cinder/etc/cinder/* /etc/cinder/.


5. Change conf ownership

$ sudo chown -R stack.stack /etc/cinder

$ mv /etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf

$ sudo chown root.root /etc/cinder/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/cinder/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/cinder

$ sudo chown stack.stack /var/lib/cinder

$ mkdir -p /var/lib/cinder/volumes

$ sudo mkdir -p /var/lock/cinder

$ sudo chown stack.stack /var/lock/cinder

$ sudo mkdir -p /var/run/cinder

$ sudo chown stack.stack /var/run/cinder


6. Edit cinder.conf

$ vi /etc/cinder/cinder.conf


[DEFAULT]

rpc_backend=cinder.openstack.common.rpc.impl_kombu

rabbit_host=controller

rabbit_password=rabbit

api_paste_config=api-paste.ini

state_path=/var/lib/cinder

glance_host=controller

lock_path=/var/lock/cinder

debug=True

verbose=True

rootwrap_config=/etc/cinder/rootwrap.conf

auth_strategy=keystone

volume_name_template=volume-%s

iscsi_helper=tgtadm

volumes_dir=$state_path/volumes

# volume_group=cinder-volumes               # removed: it is set per backend in the volume-type section below


enabled_backends=lvm-iscsi-driver

default_volume_type=lvm-iscsi-type


[lvm-iscsi-driver]

volume_group=cinder-volumes

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

san_ip=controller

volume_backend_name=lvm-iscsi


[database]

connection = mysql://cinder:cinder_dbpass@controller/cinder


[keystone_authtoken]

auth_host=controller

auth_port=35357

auth_protocol=http

auth_uri=http://controller:5000

admin_user=cinder

admin_password=cinder_pass

admin_tenant_name=service


7. Create the Cinder tables

$ cinder-manage db sync


8. Create the cinder-volumes volume group

$ mkdir -p ~/cinder-volumes

$ cd cinder-volumes

$ dd if=/dev/zero of=cinder-volumes-backing-file bs=1 count=0 seek=5G

$ sudo losetup /dev/loop1 /home/stack/cinder-volumes/cinder-volumes-backing-file

$ sudo fdisk /dev/loop1

n p 1 Enter Enter t 8e w        # new primary partition 1, change its type to 8e (Linux LVM), write

$ sudo pvcreate /dev/loop1

$ sudo vgcreate cinder-volumes /dev/loop1
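
A non-interactive equivalent of the fdisk keystrokes above (a sketch using parted; same loop device assumed):

$ sudo parted -s /dev/loop1 mklabel msdos mkpart primary 0% 100% set 1 lvm on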


9. Register the init scripts

$ sudo vi /etc/init/cinder-api.conf


description "Cinder api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-api.log" stack


$ sudo service cinder-api start


$ sudo vi /etc/init/cinder-scheduler.conf


description "Cinder scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-scheduler.log" stack


$ sudo service cinder-scheduler start


$ sudo vi /etc/init/cinder-volume.conf


description "Cinder volume server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log" stack


$ sudo service cinder-volume start


10. Register the volume type

$ cinder type-create lvm-iscsi-type

$ cinder type-key lvm-iscsi-type set volume_backend_name=lvm-iscsi


11. Create a volume

$ cinder create --display-name test01 --volume-type lvm-iscsi-type 1
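
To confirm the volume landed in the LVM backend (sketch):

$ cinder list                   # the status should become "available"
$ sudo lvs cinder-volumes       # a volume-<uuid> logical volume should appear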




[ Installing Nova on the Controller ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


$ git clone https://github.com/kanaka/novnc.git

$ sudo cp -R novnc /usr/share/novnc


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';


3. Register the service

$ keystone user-create --name=nova --pass=nova_pass --email=nova@example.com

$ keystone user-role-add --user=nova --tenant=service --role=admin

$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"

$ keystone endpoint-create --service=nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s


4. Generate the conf file

$ cd ~/Git/nova

$ sudo tox -i http://xxx.xxx.xxx.xxx/pypi/web/simple -egenconfig          # IP of the local PyPI mirror

$ sudo chown stack.stack /home/stack/Git/nova/etc/nova/nova.conf.sample


5. Create conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


6. Change conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


7. Edit nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.151

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

vncserver_listen=192.168.230.151

vncserver_proxyclient_address=192.168.230.151


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


8. Create the Nova tables

$ nova-manage db sync


9. Register the init scripts

$ sudo vi /etc/init/nova-api.conf


description "Nova api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log" stack


$ sudo service nova-api start


$ sudo vi /etc/init/nova-scheduler.conf


description "Nova scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-scheduler --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack


$ sudo service nova-scheduler start


$ sudo vi /etc/init/nova-conductor.conf


description "Nova conductor server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack


$ sudo service nova-conductor start


$ sudo vi /etc/init/nova-consoleauth.conf


description "Nova consoleauth server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-consoleauth --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-consoleauth.log" stack


$ sudo service nova-consoleauth start



$ sudo vi /etc/init/nova-console.conf


description "Nova console server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-console --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-console.log" stack


$ sudo service nova-console start


$ sudo vi /etc/init/nova-cert.conf


description "Nova cert server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-cert --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-cert.log" stack


$ sudo service nova-cert start


$ sudo vi /etc/init/nova-novncproxy.conf


description "Nova novncproxy server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novncproxy.log" stack


$ sudo service nova-novncproxy start



[ Installing Neutron on the Controller ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';


3. Register the service

$ keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com

$ keystone service-create --name=neutron --type=network --description="OpenStack Networking"

$ keystone user-role-add --user=neutron --tenant=service --role=admin

$ keystone endpoint-create --service=neutron --publicurl http://controller:9696 --adminurl http://controller:9696  --internalurl http://controller:9696


4. Create conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo mkdir -p /etc/neutron/plugins

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/plugins/ml2 /etc/neutron/plugins/.

$ sudo cp -R ~/Git/neutron/etc/neutron/rootwrap.d/ /etc/neutron/.


5. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


6. Edit neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

router_distributed = True

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = 86be..........       # must be the tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


7. Edit ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.151

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


8. Create the Neutron tables

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno
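
The resulting migration level can be checked afterwards (sketch; neutron-db-manage wraps alembic, so "current" prints the revision):

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current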


9. Register the init script

$ sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script

  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server

  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \

    --config-file /etc/neutron/neutron.conf \

    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \

    --log-file /var/log/neutron/neutron-server.log $CONF_ARG

end script


Starting the Neutron server manually

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


10. Verify with a command

$ neutron ext-list


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash

sudo service neutron-server $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart



###############   Network   ######################


[ Installing Neutron on the Network Node ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # pbr has version-pinning issues, so install it separately

$ sudo pip install -e .


$ sudo apt-get install dnsmasq


2. Create conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


4. Edit neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = service              # must be the tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Edit ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.152

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr_snat


7. Configure the DHCP agent

$ vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                      # writing RegionOne causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2
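
A quick sanity check of the bridge layout (sketch):

$ sudo ovs-vsctl show     # br-ex should list eth0, br-tun should list eth2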


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-dhcp-agent.conf


description "Neutron dhcp Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/neutron-dhcp-agent.log" stack


$ sudo service neutron-dhcp-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Verify the installation

$ neutron agent-list


12. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-dhcp-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




###############   Compute   ######################


[ Installing Nova on the Compute Node ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # pbr has version-pinning issues, so install it separately

$ sudo pip install -e .


2. Create conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lib/nova/instances

$ sudo chown stack.stack /var/lib/nova/instances

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


Copy nova.conf and logging.conf (run on the controller node)

$ scp /etc/nova/logging.conf /etc/nova/nova.conf stack@compute:/etc/nova/.


4. Edit nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.153

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

novncproxy_base_url=http://controller:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.230.153      # the compute node's own address, which the noVNC proxy connects back to


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


5. Register the init script

$ sudo vi /etc/init/nova-compute.conf


description "Nova compute server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-compute --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-compute.log" stack


$ sudo service nova-compute start



[ Installing Neutron on the Compute Node ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Create conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


Copy the etc files (from the network node)

$ scp /etc/neutron/* stack@compute:/etc/neutron/.

$ scp /etc/neutron/plugins/ml2/ml2_conf.ini stack@compute:/etc/neutron/plugins/ml2/.


4. Edit neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = service              # must be the tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Edit ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]                                          # added on the compute node

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]                                             # added on the compute node

local_ip = 192.168.200.153

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr


7. Configure the DHCP agent

$ vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                    # writing RegionOne causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


$ keystone endpoint-list                    # check the Region first, then set auth_region


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




Creating the external network

$ neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat


$ neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.75.193,end=192.168.75.254 --disable-dhcp --gateway 192.168.75.2 192.168.75.0/24


Creating the internal network

$ neutron net-create demo-net --provider:network_type vxlan

$ neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24


Creating the router

$ neutron router-create demo-router

$ neutron router-interface-add demo-router demo-subnet

$ neutron router-gateway-set demo-router ext-net


Registering security rules

$ neutron security-group-rule-create --protocol icmp --direction ingress default

$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default


MTU settings

1. Set in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set dynamically (a reboot is required)

$ ifconfig eth2 mtu 9000

$ reboot


Registering a default gateway route

$ sudo route add -net "0.0.0.0/0" gw "10.0.0.1"


Creating a VM

$ nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


Adding a floating IP

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]


Calling the metadata service

Inside the cirros VM

$ wget http://169.254.169.254/latest/meta-data/instance-id


Calling the metadata service directly from the controller node

$ curl \

  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \

  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \

  -H 'x-instance-id-signature: \

       80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \

  http://192.168.230.230:8775/latest/meta-data


[ Computing the x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> hmac.new('opensack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>


Viewing settings without comments

$ cat /etc/nova/nova.conf | grep -v ^# | grep -v ^$


Checking the VXLAN connection

$ sudo ovs-ofctl show br-tun


Order for deleting a net

1. Remove the router's interface to the subnet

$ neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

$ neutron subnet-delete [subnet-id]


3. Delete the net

$ neutron net-delete [net-id]


Building a distribution installable with pip

$ sudo python setup.py sdist --formats=gztar


























1. Network A -> Network A

PREROUTING(nat:dnat) -> INPUT(filter) -> OUTPUT(nat:dnat) -> OUTPUT(filter) -> POSTROUTING(nat:snat)


2. Network A -> Network B

PREROUTING(nat:dnat) -> FORWARD(filter) -> POSTROUTING(nat:snat)
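
A minimal, hypothetical illustration of path 2 (the addresses are examples, not from this deployment):

# DNAT an external address in PREROUTING, filter the routed traffic in FORWARD,
# then masquerade the reply path in POSTROUTING
sudo iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.5
sudo iptables -A FORWARD -d 10.0.0.5 -p tcp --dport 80 -j ACCEPT
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE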


3. iptables nat after creating a Nova instance

PREROUTING ACCEPT

    nova-network-PREROUTING

        -> VM DNAT translation

    nova-compute-PREROUTING

    nova-api-metadat-PREROUTING

INPUT ACCEPT

OUTPUT ACCEPT

    nova-network-OUTPUT

        -> VM DNAT translation

    nova-compute-OUTPUT

    nova-api-metadat-OUTPUT

POSTROUTING ACCEPT

    nova-network-POSTROUTING

    nova-compute-POSTROUTING

    nova-api-metadat-POSTROUTING

    nova-postrouting-bottom

        nova-network-snat

            nova-network-float-snat

                -> VM SNAT translation

            -> Host SNAT translation

        nova-compute-snat

            nova-compute-float-snat

        nova-api-metadat-snat

            nova-api-metadat-float-snat


4. iptables filter after creating a Nova instance

INPUT ACCEPT

    nova-compute-INPUT

    nova-network-INPUT

        - open DHCP (per bridge)

    nova-api-metadat-INPUT

        - accept the nova metadata API port 8775

FORWARD ACCEPT

    nova-filter-top

        nova-compute-local

            - nova-compute-inst-732 (created per instance)

                nova-compute-provider

                - security rules inserted

                nova-compute-sg-fallback

                    - drop all packets

        nova-network-local

        nova-api-metadat-local

    nova-compute-FORWARD

    nova-network-FORWARD

        - accept in/out packets per bridge

    nova-api-metadat-FORWARD

OUTPUT ACCEPT

    nova-filter-top

        nova-compute-local

            - nova-compute-inst-732 (created per instance)

                nova-compute-provider

                - security rules inserted

                nova-compute-sg-fallback

                    - drop all packets

        nova-network-local

        nova-api-metadat-local

    nova-compute-OUTPUT

    nova-network-OUTPUT

    nova-api-metadat-OUTPUT
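
These chains can be inspected directly on the compute node (sketch; chain names come from the truncated binary names):

$ sudo iptables -t nat -L -n --line-numbers | grep nova-
$ sudo iptables -S | grep nova-compute-inst     # per-instance security group chains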






[ Controller Install ]


1. controller node install (nova, mysql, rabbitmq, keystone, glance, cinder, horizon)

$ sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient


$ sudo apt-get install mysql-server-5.5


$ sudo apt-get install rabbitmq-server


$ sudo apt-get install keystone python-keystoneclient


$ sudo apt-get install glance python-glanceclient


$ sudo apt-get install cinder-api cinder-scheduler cinder-volume


$ sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard


2. database configuration (nova, glance, cinder, keystone)

$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

$ sudo vi /etc/mysql/my.cnf

[mysqld]

# add the following

skip-host-cache
skip-name-resolve


$ sudo service mysql restart


$ mysql -u root -p

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';


mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';


mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';


mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

           IDENTIFIED BY 'KEYSTONE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

           IDENTIFIED BY 'KEYSTONE_DBPASS';


$ sudo vi /etc/hosts.allow

ALL:192.168.0.0/255.255.0.0

mysqld:ALL


3. keystone setting

$ sudo rm /var/lib/keystone/keystone.db

$ sudo vi /etc/keystone/keystone.conf

connection = mysql://keystone:KEYSTONE_DBPASS@localhost/keystone

token_format = UUID


$ sudo keystone-manage db_sync

$ sudo service keystone restart


$ vi keystone_basic.sh

#!/bin/sh

#

# Keystone basic configuration 


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#

HOST_IP=192.168.75.131

ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin_pass}

SERVICE_PASSWORD=${SERVICE_PASSWORD:-service_pass}

export SERVICE_TOKEN="ADMIN"

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"

SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}


get_id () {

    echo `$@ | awk '/ id / { print $4 }'`

}


# Tenants

ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)

SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)



# Users

ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)



# Roles

ADMIN_ROLE=$(get_id keystone role-create --name=admin)

KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)

KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)


# Add Roles to Users in Tenants

keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT


# The Member role is used by Horizon and Swift

MEMBER_ROLE=$(get_id keystone role-create --name=Member)


# Configure service users/roles

NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE


GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE


QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE


CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE


$ vi keystone_endpoints_basic.sh

#!/bin/sh

#

# Keystone basic Endpoints


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#


# Host address

HOST_IP=192.168.75.131

EXT_HOST_IP=192.168.75.131

VOLUME_HOST_IP=192.168.75.131

VOLUME_EXT_HOST_IP=192.168.75.131

NETWORK_HOST_IP=192.168.75.131

NETWORK_EXT_HOST_IP=192.168.75.131


# MySQL definitions

MYSQL_USER=keystone

MYSQL_DATABASE=keystone

MYSQL_HOST=$HOST_IP

MYSQL_PASSWORD=KEYSTONE_DBPASS


# Keystone definitions

KEYSTONE_REGION=RegionOne

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"


while getopts "u:D:p:m:K:R:E:T:vh" opt; do

  case $opt in

    u)

      MYSQL_USER=$OPTARG

      ;;

    D)

      MYSQL_DATABASE=$OPTARG

      ;;

    p)

      MYSQL_PASSWORD=$OPTARG

      ;;

    m)

      MYSQL_HOST=$OPTARG

      ;;

    K)

      MASTER=$OPTARG

      ;;

    R)

      KEYSTONE_REGION=$OPTARG

      ;;

    E)

      export SERVICE_ENDPOINT=$OPTARG

      ;;

    T)

      export SERVICE_TOKEN=$OPTARG

      ;;

    v)

      set -x

      ;;

    h)

      cat <<EOF

Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]

       [-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ] 

       [ -T keystone_token ]

          

Add -v for verbose mode, -h to display this message.

EOF

      exit 0

      ;;

    \?)

      echo "Unknown option -$OPTARG" >&2

      exit 1

      ;;

    :)

      echo "Option -$OPTARG requires an argument" >&2

      exit 1

      ;;

  esac

done  


if [ -z "$KEYSTONE_REGION" ]; then

  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_TOKEN" ]; then

  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_ENDPOINT" ]; then

  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2

  missing_args="true"

fi


if [ -z "$MYSQL_PASSWORD" ]; then

  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2

  missing_args="true"

fi


if [ -n "$missing_args" ]; then

  exit 1

fi

 

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'

keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'

keystone service-create --name glance --type image --description 'OpenStack Image Service'

keystone service-create --name keystone --type identity --description 'OpenStack Identity'

keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'

keystone service-create --name quantum --type network --description 'OpenStack Networking service'


create_endpoint () {

  case $1 in

    compute)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'

    ;;

    volume)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$VOLUME_EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s'

    ;;

    image)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/v2' --adminurl 'http://'"$HOST_IP"':9292/v2' --internalurl 'http://'"$HOST_IP"':9292/v2'

    ;;

    identity)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'

    ;;

    ec2)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'

    ;;

    network)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$NETWORK_EXT_HOST_IP"':9696/' --adminurl 'http://'"$NETWORK_HOST_IP"':9696/' --internalurl 'http://'"$NETWORK_HOST_IP"':9696/'

    ;;

  esac

}


for i in compute volume image object-store identity ec2 network; do

  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1

  create_endpoint $i $id

done
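
For reference, a hypothetical invocation of the script above (the option letters match its getopts list); adjust the password and endpoint to your deployment:

$ sh keystone_endpoints_basic.sh -u keystone -D keystone -p KEYSTONE_DBPASS -m 192.168.75.131 -R RegionOne -T ADMIN -E "http://192.168.75.131:35357/v2.0"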


$ vi admin.rc

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin_pass

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


$ keystone tenant-create --name DEV --enabled true

$ keystone user-create --name dev_admin --tenant 5e795212d0804ad89234d9a1ac30c8ca --pass adminPass --enabled true

$ keystone user-create --name dev_user01 --tenant 5e795212d0804ad89234d9a1ac30c8ca --pass userPass --enabled true


# Link the Admin role to dev_admin

$ keystone user-role-add --user c207c127ba7c46d2bf18f6c39ac4ff78 --role 19f87df854914a1a903972f70d7d631a --tenant 5e795212d0804ad89234d9a1ac30c8ca


# Link the Member role to dev_user01

$ keystone user-role-add --user 908c6c5691374d6a95b64fea0e1615ce --role b13ffb470d1040d298e08cf9f5a6003a --tenant 5e795212d0804ad89234d9a1ac30c8ca
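
The user/role/tenant IDs above are deployment-specific. A minimal sketch, assuming the keystone CLI's usual table output, that resolves them by name instead of pasting UUIDs:

TENANT_ID=$(keystone tenant-list | awk '/ DEV / {print $2}')
USER_ID=$(keystone user-list | awk '/ dev_admin / {print $2}')
ROLE_ID=$(keystone role-list | awk '/ admin / {print $2}')
keystone user-role-add --user $USER_ID --role $ROLE_ID --tenant $TENANT_ID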



$ vi dev_admin.rc

export OS_USERNAME=dev_admin

export OS_PASSWORD=adminPass

export OS_TENANT_NAME=DEV

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


$ vi dev_user.rc

export OS_USERNAME=dev_user01

export OS_PASSWORD=userPass

export OS_TENANT_NAME=DEV

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


4. nova setting

$ sudo vi /etc/nova/nova.conf


dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

logdir=/var/log/nova 

state_path=/var/lib/nova 

lock_path=/var/lock/nova 

force_dhcp_release=True 

libvirt_use_virtio_for_bridges=True 

connection_type=libvirt 

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf 

verbose=True 

debug=True 

ec2_private_dns_show_ip=True 

api_paste_config=/etc/nova/api-paste.ini 

enabled_apis=ec2,osapi_compute,metadata 

cinder_catalog_info=volume:cinder:adminURL

use_network_dns_servers=True

metadata_host=192.168.75.131

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_manager=nova.api.manager.MetadataManager

metadata_port=8775

vncserver_proxyclient_address=192.168.230.131

vncserver_listen=0.0.0.0

vnc_enabled=true

xvpvncproxy_base_url=http://192.168.230.131:6081/console

novncproxy_base_url=http://192.168.230.131:6080/vnc_auto.html

remove_unused_base_images=False

image_create_to_qcow2 = True

api_rate_limit=True


#rpc setting 

rpc_backend = rabbit 

rabbit_host = 192.168.230.131


#network setting 

network_api_class = nova.network.api.API 

security_group_api = nova


# Network settings 

dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

network_manager=nova.network.manager.VlanManager 

network_api_class=nova.network.api.API 

dhcp_lease_time=600 

vlan_start=1001 

fixed_range=10.0.0.0/16 

allow_same_net_traffic=False 

multi_host=True 

send_arp_for_ha=True 

#share_dhcp_address=True 

force_dhcp_release=True 

flat_interface = eth1

public_interface=eth0


#auth setting 

use_deprecated_auth = false

auth_strategy = keystone


#image setting 

glance_api_services = 192.168.75.131:9292 

image_service = nova.image.glance.GlanceImageService 

glance_host = 192.168.230.131


[database] 

connection = mysql://nova:NOVA_DBPASS@localhost/nova

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin 

admin_password = admin_pass


$ sudo nova-manage db sync

$ sudo service nova-api restart

$ sudo service nova-cert restart

$ sudo service nova-consoleauth restart

$ sudo service nova-scheduler restart

$ sudo service nova-conductor restart

$ sudo service nova-novncproxy restart


5. glance setting

$ sudo vi /etc/glance/glance-api.conf


# Comment out the following sections: qpid, swift_store, s3_store, sheepdog_store


rabbit_host = 192.168.230.131

rabbit_port = 5672 

rabbit_use_ssl = false 

rabbit_virtual_host = / 

rabbit_notification_exchange = glance

rabbit_notification_topic = notifications 

rabbit_durable_queues = False

 

[database]

connection = mysql://glance:GLANCE_DBPASS@192.168.230.131/glance

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357 

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin

admin_password = admin_pass


[paste_deploy]

flavor=keystone


$ sudo vi /etc/glance/glance-registry.conf


[database]

connection = mysql://glance:GLANCE_DBPASS@192.168.230.131/glance

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin

admin_password = admin_pass


[paste_deploy]

flavor=keystone


$ mysql -u root -p

mysql> use glance;

mysql> alter table migrate_version convert to character set utf8 collate utf8_unicode_ci;

mysql> flush privileges;


$ sudo glance-manage db_sync

$ sudo service glance-api restart

$ sudo service glance-registry restart


$ glance image-create --name ubuntu-14.04-cloudimg --disk-format qcow2 --container-format bare --owner e07a35f02d9e4281b8336d9112faed51 --file ubuntu-14.04-server-cloudimg-amd64-disk1.img --is-public True --progress


$ wget --no-check-certificate https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

$ glance image-create --name cirros-0.3.0 --disk-format qcow2 --container-format bare --owner e07a35f02d9e4281b8336d9112faed51 --file cirros-0.3.0-x86_64-disk.img --is-public True --progress


6. cinder setting

$ sudo cinder-manage db sync

$ sudo vi /etc/cinder/cinder.conf


[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_config = /etc/cinder/api-paste.ini

iscsi_helper = tgtadm

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

verbose = True

debug=True

auth_strategy = keystone

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes


default_availability_zone=LH_ZONE

storage_availability_zone=LH_ZONE


rpc_backend = cinder.openstack.common.rpc.impl_kombu

rabbit_host = 192.168.75.131

rabbit_port = 5672


glance_host=192.168.230.131

glance_port=9292

glance_api_servers=$glance_host:$glance_port


default_volume_type=LOW_END


# multi backend

enabled_backends=LEFTHAND,SOLIDFIRE

[LEFTHAND]

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

volume_driver=cinder.volume.drivers.san.hp.hp_lefthand_iscsi.HPLeftHandISCSIDriver

volume_backend_name=ISCSI_LH

san_ip=192.168.230.141

san_login=admin

san_password=admin_pass

san_clustername=CLUSTER-LEFTHAND

san_ssh_port=16022


[SOLIDFIRE]

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

verbose = True

volume_driver=cinder.volume.drivers.solidfire.SolidFireDriver

volume_backend_name=ISCSI_SF

san_ip=192.168.230.151

san_login=admin

san_password=admin_pass



[database]

connection=mysql://cinder:cinderPass@192.168.75.131/cinder


[keystone_authtoken]

auth_uri = http://192.168.75.131:5000

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http

admin_tenant_name = admin

admin_user = admin

admin_password = admin_pass


$ sudo cinder-manage db sync

$ sudo service cinder-api restart

$ sudo service cinder-volume restart

$ sudo service cinder-scheduler restart


7. View LeftHand cluster information

$ ssh -p 16022 user@192.168.230.140

CLIQ> getclusterinfo searchdepth=1 verbose=0

CLIQ> getserverinfo servername=ubuntu

CLIQ> getvolumeinfo volumename=volume-sfpoc-9d36737a-d332-4613-bce2-32465904a6fc


8. Multi-backend setup

$ cinder type-create LOW_END

$ cinder type-key LOW_END set volume_backend_name=ISCSI_LH

$ cinder type-create HIGH_END

$ cinder type-key HIGH_END set volume_backend_name=ISCSI_SF


# Create a 1 GB high-end volume

$ cinder create --display-name high-test-01 --volume-type HIGH_END 1
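
To confirm the type-to-backend mapping before handing the types to users, a quick check (sketch):

$ cinder extra-specs-list                # each type should show its volume_backend_name
$ cinder show high-test-01               # os-vol-host-attr:host should land on the SOLIDFIRE backend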


9. Backend QoS setup

$ cinder type-create IOPS_3000

$ cinder type-key IOPS_3000 set volume_backend_name=ISCSI_SF

$ cinder qos-create QOS_IOPS_3000 consumer="back-end" minIOPS=3000 maxIOPS=3000 burstIOPS=3000

$ cinder qos-associate 1e9694b8-eca4-4ce7-b476-d1637535aaa2 9c241c66-30fd-442b-b7a1-79b4f1892919

$ cinder qos-get-association 1e9694b8-eca4-4ce7-b476-d1637535aaa2
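
Note that qos-associate takes the QoS spec ID first and the volume type ID second. A sketch that resolves both by name instead of pasting UUIDs:

QOS_ID=$(cinder qos-list | awk '/ QOS_IOPS_3000 / {print $2}')
TYPE_ID=$(cinder type-list | awk '/ IOPS_3000 / {print $2}')
cinder qos-associate $QOS_ID $TYPE_ID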



[ Compute Node Install ]


1. compute node install (nova-compute, nova-network, nova-api-metadata)

$ sudo apt-get install nova-compute-kvm nova-network nova-api-metadata





[ Basic setup ]


1. network setting

$ nova network-create --fixed-range-v4 10.0.0.0/24 --vlan 1001 --gateway 10.0.0.1 --bridge br1001 --bridge-interface eth0 --multi-host T --dns1 8.8.8.8 --dns2 8.8.4.4 --project-id 5e795212d0804ad89234d9a1ac30c8ca dev_network
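
To verify the created network (sketch):

$ nova network-list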


2. fixed ip reserve

$ nova fixed-ip-reserve 10.0.0.3

$ nova fixed-ip-reserve 10.0.0.4

$ nova fixed-ip-reserve 10.0.0.5
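
Equivalently, as a loop (bash brace expansion):

$ for ip in 10.0.0.{3..5}; do nova fixed-ip-reserve $ip; done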


3. floating ip create

$ nova floating-ip-bulk-create 192.168.75.128/25 --interface eth0


4. Create a security group

$ nova secgroup-create connect 'icmp and ssh'

$ nova secgroup-add-rule connect icmp -1 -1 0.0.0.0/0

$ nova secgroup-add-rule connect tcp 22 22 0.0.0.0/0


5. Create a keypair

$ nova keypair-add stephen >> stephen.pem


6. Copy the pem file to another host

$ scp -P 22 dev_admin.pem stack@192.168.230.132:~/creds/.

$ chmod 600 dev_admin.pem


7. Copy nova.conf to the other multi-host nodes

$ for i in `seq 132 134`; do scp nova.conf stack@192.168.230.$i:~/creds/.; done


8. Zone setup

$ nova aggregate-create POC LH_ZONE

$ nova aggregate-add-host POC ubuntu


9. Boot a VM

$ nova boot test01 --flavor 1 --image 4399bba0-17a4-43ef-8fdd-4edd9c2afe74 --key_name dev_admin --security_group connect


# Boot from volume and attach an additional volume in one command

$ nova boot [name] --flavor [flavorid] \
  --block-device id=[imageid],source=image,dest=volume,size=10,bootindex=0,shutdown=remove \
  --block-device id=[volumeid],source=volume,dest=volume,size=100,bootindex=1
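
A concrete sketch with hypothetical IDs: the first device builds a 10 GB boot volume from an image, the second attaches an existing 100 GB data volume:

$ nova boot bfv01 --flavor 2 \
  --block-device id=4399bba0-17a4-43ef-8fdd-4edd9c2afe74,source=image,dest=volume,size=10,bootindex=0,shutdown=remove \
  --block-device id=9d36737a-d332-4613-bce2-32465904a6fc,source=volume,dest=volume,size=100,bootindex=1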


10. Connect to the VM

$ ssh -i dev_admin.pem cirros@10.0.0.6

$ ssh -i dev_admin.pem ubuntu@10.0.0.6




[ VMware-related settings ]


1. cinder.conf

[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_config = /etc/cinder/api-paste.ini

iscsi_helper = tgtadm

volume_name_template = %s

volume_group = cinder-volumes

verbose = True

debug=True

auth_strategy = keystone

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes


default_availability_zone=VMWARE_ZONE

storage_availability_zone=VMWARE_ZONE


rpc_backend = cinder.openstack.common.rpc.impl_kombu

rabbit_host = 192.168.75.131

rabbit_port = 5672


glance_host=192.168.75.131

glance_port=9292

glance_api_servers=$glance_host:$glance_port


default_volume_type=VMWARE_TYPE


# multi backend

enabled_backends=VMWARE_DRIVER


[VMWARE_DRIVER]

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareEsxVmdkDriver

volume_backend_name=VMWARE

vmware_host_ip = 192.168.75.131

vmware_host_password = VMWARE_PASSWORD

vmware_host_username = root


[database]

connection=mysql://cinder:cinderPass@192.168.75.131/cinder


[keystone_authtoken]

auth_uri = http://192.168.75.131:5000

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http

admin_tenant_name = admin

admin_user = admin

admin_password = admin_pass


2. Multi-backend setup

$ cinder type-create VMWARE_TYPE

$ cinder type-key VMWARE_TYPE set volume_backend_name=VMWARE


# Create a 1 GB volume with the VMware type

$ cinder create --display-name test-01 --volume-type VMWARE_TYPE 1


3. nova.conf 

$ sudo vi /etc/nova/nova.conf


dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

logdir=/var/log/nova 

state_path=/var/lib/nova 

lock_path=/var/lock/nova 

force_dhcp_release=True 

# libvirt_use_virtio_for_bridges=True 

# connection_type=libvirt 

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf 

verbose=True 

debug=True 

ec2_private_dns_show_ip=True 

api_paste_config=/etc/nova/api-paste.ini 

enabled_apis=ec2,osapi_compute,metadata 

cinder_catalog_info=volume:cinder:adminURL

use_network_dns_servers=True

metadata_host=192.168.75.131

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_manager=nova.api.manager.MetadataManager

metadata_port=8775

vncserver_proxyclient_address=192.168.230.131

vncserver_listen=0.0.0.0

vnc_enabled=true

xvpvncproxy_base_url=http://192.168.230.131:6081/console

novncproxy_base_url=http://192.168.230.131:6080/vnc_auto.html

compute_driver = vmwareapi.VMwareVCDriver

remove_unused_base_images=False

image_create_to_qcow2 = True

api_rate_limit=True


#rpc setting 

rpc_backend = rabbit 

rabbit_host = 192.168.230.131


#network setting 

network_api_class = nova.network.api.API 

security_group_api = nova


# Network settings 

dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

network_manager=nova.network.manager.VlanManager 

network_api_class=nova.network.api.API 

dhcp_lease_time=600 

vlan_start=1001 

fixed_range=10.0.0.0/16 

allow_same_net_traffic=False 

multi_host=True 

send_arp_for_ha=True 

#share_dhcp_address=True 

force_dhcp_release=True 

flat_interface = eth0

public_interface=eth0


#auth setting 

use_deprecated_auth = false

auth_strategy = keystone


#image setting 

glance_api_services = 192.168.75.131:9292 

image_service = nova.image.glance.GlanceImageService 

glance_host = 192.168.230.131


[vmware]

host_ip = 192.168.75.131

host_username = root

host_password = VMWARE_PASSWORD

cluster_name = cluster1

use_linked_clone = False


[database] 

connection = mysql://nova:NOVA_DBPASS@localhost/nova

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin 

admin_password = admin_pass


4. nova-compute.conf

#[DEFAULT]

#compute_driver=libvirt.LibvirtDriver

#[libvirt]

#virt_type=kvm


5. Zone setup

$ nova aggregate-create VMWARE VMWARE_ZONE

$ nova aggregate-add-host VMWARE controller


6. Register images

[ SliTaz Linux ]

wget http://partnerweb.vmware.com/programs/vmdkimage/trend-tinyvm1-flat.vmdk

$ glance image-create --name [vmware]trend-static-thin --file trend-tinyvm1-flat.vmdk --is-public=True --container-format=bare --disk-format=vmdk --property vmware_disktype="thin" --property vmware_adaptertype="ide"


[ Log in to SliTaz Linux and switch to DHCP ]

Log in as vmware / vmware, then become root with root / root


# vi /etc/network.conf

DHCP="yes"

STATIC="no"


[ cirros ]

wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

$ qemu-img convert -f qcow2 -O vmdk cirros-0.3.3-x86_64-disk.img cirros-0.3.3-x86_64-disk.vmdk

$ glance image-create --name [vmware]cirros-0.3.3 --disk-format vmdk --container-format bare --file cirros-0.3.3-x86_64-disk.vmdk --property vmware_disktype="sparse" --property hw_vif_model="VirtualVmxnet" --property vmware_adaptertype="ide" --is-public True --progress


7. Save a VM as an image

1. Log in to the ESXi host

2. Change to the VM directory

# cd /vmfs/volumes/datastore1/6c516279-c83f-43ec-a8d4-bec540604280

3. thin copy

# vmkfstools -i 6c516279-c83f-43ec-a8d4-bec540604280.vmdk -d thin .

./vmware_temp/trend-tinyvm1-dhcp-thin.vmdk

4. Fetch the copied file from another host via scp

$ scp root@192.168.75.182:/vmfs/volumes/542cf526-bef9f829-2f02-000c29fef6ec/vmware_temp/trend-tinyvm1-dhcp-thin-flat.vmdk .


8. nova boot

$ nova hypervisor-list

$ nova boot test01 --flavor 1 --image 6d9745dc-0fc9-4802-b21d-329004353406 --key_name stephen --availability-zone "VMWARE_ZONE::domain-c12(cluster1)"










[ Live migration prerequisites ]

1. The libvirt version must be identical across compute hosts.

2. "libvirtd -d -l" 옵션으로 떠 있어야 한다.


# vi /etc/libvirt/libvirtd.conf

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"


# vi /etc/init/libvirt-bin.conf

env libvirtd_opts="-d -l"


# vi /etc/default/libvirt-bin

libvirtd_opts=" -d -l"


sudo service libvirt-bin restart
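
A quick check that libvirtd is actually listening on TCP (sketch; the host name "compute" is an assumption):

netstat -lntp | grep 16509                     # libvirt's default TCP port
virsh -c qemu+tcp://compute/system list --all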


3. The "send_arp_for_ha" flag in nova.conf must be set to True


# vi /etc/nova/nova.conf

send_arp_for_ha=True

#force_config_drive = always

block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
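
With the prerequisites above in place, a migration can be triggered roughly as follows (instance and target host names are hypothetical; older novaclient versions spell the flag --block_migrate):

$ nova live-migration test01 compute02                      # shared storage
$ nova live-migration --block-migrate test01 compute02      # block migration, no shared storage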


[ VM time synchronization ]

1. Linux guests synchronize with the host machine via kvm-clock, which is built into the kernel.


2. Windows guests do not get kvm-clock, so synchronize using the following two mechanisms:

     - RTC (Real Time Clock)

        bcdedit /set {default} USEPLATFORMCLOCK on


        <clock offset='localtime'>
            <timer name='rtc' tickpolicy='catchup' track='guest'/>
            <timer name='pit' tickpolicy='delay'/>
            <timer name='hpet' present='no'/>
        </clock>


     - TSC (Time Stamp Counter)
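
To apply the <clock> element above to a guest and verify it (sketch; the domain name is hypothetical):

virsh edit instance-00000001                        # add the <clock> element shown above
virsh dumpxml instance-00000001 | grep -A4 '<clock'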



Slides about OpenStack presented at DevOn 2013.


OpenStack-Overview.pdf




[ Edit the vmx file to enable VT-x for Ubuntu installed in VMware on a Mac ]

vhv.enable = "TRUE"


[ Install the SSH server ]

sudo apt-get install -y openssh-server


[ Architecture ]

Cloud Controller

    - hostname : controller

    - eth0 : 192.168.75.131

    - eth1 : 192.168.230.131

    - Installed modules : mysql, rabbitMQ, keystone, glance, nova-api,

                       cinder-api, cinder-scheduler, cinder-volume, open-iscsi, iscsitarget

                       quantum-server

Network

    - hostname : network

    - eth0 : 192.168.75.132

    - eth1 : 192.168.230.132

    - eth2 : 

    - eth3 : 192.168.75.133

    - Installed modules : openvswitch-switch openvswitch-datapath-dkms

                       quantum-plugin-openvswitch-agent dnsmasq quantum-dhcp-agent quantum-l3-agent

Compute

    - hostname : compute

    - eth0 : 192.168.75.134

    - eth1 : 192.168.230.134

    - eth2 : 

    - Installed modules : openvswitch-switch openvswitch-datapath-dkms 

                       quantum-plugin-openvswitch-agent, nova-compute-kvm, open-iscsi, iscsitarget


[ Network layout ]

eth0 : public network (NAT)                     192.168.75.0/24

eth1 : private host network, Custom (VMnet2)    192.168.230.0/24

eth2 : VM private network                       10.0.0.0/24

eth3 : VM Quantum public network (NAT)          192.168.75.0/26


[ Change the hostname ]

vi /etc/hosts

192.168.230.131 controller

192.168.230.132 network

192.168.230.134 compute


vi /etc/hostname

   controller


hostname -F /etc/hostname

Verify from a new terminal


[ Configure eth0 and eth1 ]

vi /etc/network/interfaces


# The loopback network interface

auto lo

iface lo inet loopback


# Host public network

auto eth0

iface eth0 inet static

      address 192.168.75.131

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# Host private network

auto eth1

iface eth1 inet static

      address 192.168.230.131

      netmask 255.255.255.0


service networking restart


[ Check that Ubuntu installed on VMware supports virtualization ]

egrep '(vmx|svm)' --color=always /proc/cpuinfo


[ Nova installation guide ]

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst


[ Nova source locations ]

nova link source = /usr/lib/python2.7/dist-packages/nova

nova original source = /usr/share/pyshared/nova


##################   Common setup on all nodes   #####################


[ Set the root password ]

sudo su -

passwd


[ repository upgrade ]

apt-get install -y ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring


echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list


apt-get update

apt-get upgrade

apt-get dist-upgrade


[ Install screen and vim ]

sudo apt-get install -y screen vim


[ .screenrc ]

vbell off

autodetach on

startup_message off

defscrollback 1000

attrcolor b ".I"

termcap xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'

defbce "on"

#term screen-256color


## apps I want to auto-launch

#screen -t irssi irssi

#screen -t mutt mutt


## statusline, customized. (should be one-line)

hardstatus alwayslastline '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}[%{W}%n%f %t%?(%u)%?%{=b kR}]%{= kw}%?%+Lw%?%?%= %{g}][%{Y}%l%{g}]%{=b C}[ %D %m/%d %C%a ]%{W}'


[ .vimrc ]

syntax on

set nocompatible

set number

set backspace=indent,eol,start

set tabstop=4

set shiftwidth=4

set autoindent

set visualbell

set laststatus=2

set statusline=%h%F%m%r%=[%l:%c(%p%%)]

set hlsearch

set background=dark

set expandtab

set tags=./tags,./TAGS,tags,TAGS,/usr/share/pyshared/nova/tags

set et

" Removes trailing spaces
function! TrimWhiteSpace()
    %s/\s\+$//e
endfunction

nnoremap <silent> <Leader>rts :call TrimWhiteSpace()<CR>
autocmd FileWritePre    * :call TrimWhiteSpace()
autocmd FileAppendPre   * :call TrimWhiteSpace()
autocmd FilterWritePre  * :call TrimWhiteSpace()
autocmd BufWritePre     * :call TrimWhiteSpace()


[ remove dmidecode ]

apt-get purge dmidecode

apt-get autoremove

kill -9 [dmidecode process]


[ As root, create the nova user and grant sudo rights if the account does not exist ]

adduser nova


visudo

   nova     ALL=(ALL:ALL) NOPASSWD:ALL


[ Install ntp ]

apt-get install -y ntp


vi /etc/ntp.conf

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server time.bora.net


service ntp restart


# Set the timezone to Korea and do an initial time sync

ntpdate -u time.bora.net

ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime


[ Install the MySQL client ]

apt-get install -y python-mysqldb mysql-client-5.5


[ Install and verify KVM ]

apt-get install -y cpu-checker

apt-get install -y kvm libvirt-bin pm-utils

kvm-ok


# Check that the kvm modules are loaded

lsmod | grep kvm


# Auto-load the kvm modules on server reboot

vi /etc/modules

   kvm

   kvm_intel


vi /etc/libvirt/qemu.conf

   cgroup_device_acl = [

   "/dev/null", "/dev/full", "/dev/zero",

   "/dev/random", "/dev/urandom",

   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",

   "/dev/rtc", "/dev/hpet","/dev/net/tun"

   ]


# delete default virtual bridge

virsh net-destroy default

virsh net-undefine default


# enable live migration

vi /etc/libvirt/libvirtd.conf

   listen_tls = 0

   listen_tcp = 1

   auth_tcp = "none"


vi /etc/init/libvirt-bin.conf

   env libvirtd_opts="-d -l"


vi /etc/default/libvirt-bin

   libvirtd_opts="-d -l"


service dbus restart

service libvirt-bin restart


[ Install bridge utilities ]

apt-get install -y vlan bridge-utils


[ Enable IP forwarding ]

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

sysctl net.ipv4.ip_forward=1


##################   Installing the Cloud Controller   #####################


[ Configure ntp ]

vi /etc/ntp.conf

   server time.bora.net

service ntp restart


[ Configure the network ]

vi /etc/network/interfaces


# The loopback network interface

auto lo

iface lo inet loopback


# Host public network

auto eth0

iface eth0 inet static

      address 192.168.75.131

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# Host private network

auto eth1

iface eth1 inet static

      address 192.168.230.131

      netmask 255.255.255.0


service networking restart


[ Change the hostname ]

vi /etc/hosts

192.168.230.131 controller

192.168.230.132 network

192.168.230.134 compute


vi /etc/hostname

   controller


hostname -F /etc/hostname


[ Install the MySQL server ]

apt-get install -y python-mysqldb mysql-server                 # password: TEMP_PASSWORD (placeholder; substitute your own)

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

service mysql restart


[ rabbitmq server install ]

apt-get install -y rabbitmq-server


# Switch to the nova user

sudo su - nova


[ Database setup ]

mysql -u root -p

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'TEMP_PASSWORD';


CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'TEMP_PASSWORD';


CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'TEMP_PASSWORD';


CREATE DATABASE quantum;

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'controller' IDENTIFIED BY 'TEMP_PASSWORD';


CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY 'TEMP_PASSWORD';


# If the GRANT statements fail

use mysql;


UPDATE user SET

Select_priv = 'Y',

Insert_priv = 'Y',

Update_priv = 'Y',

Delete_priv = 'Y',

Create_priv = 'Y',

Drop_priv = 'Y',

Reload_priv = 'Y',

Shutdown_priv = 'Y',

Process_priv = 'Y',

File_priv = 'Y',

Grant_priv = 'Y',

References_priv = 'Y',

Index_priv = 'Y',

Alter_priv = 'Y',

Show_db_priv = 'Y',

Super_priv = 'Y',

Create_tmp_table_priv = 'Y',

Lock_tables_priv = 'Y',

Execute_priv = 'Y',

Repl_slave_priv = 'Y',

Repl_client_priv = 'Y',

Create_view_priv = 'Y',

Show_view_priv = 'Y',

Create_routine_priv = 'Y',

Alter_routine_priv = 'Y',

Create_user_priv = 'Y',

Event_priv = 'Y',

Trigger_priv = 'Y',

Create_tablespace_priv = 'Y'

WHERE user IN ('keystone', 'glance', 'nova', 'quantum', 'cinder');


[ Install Keystone ]

sudo apt-get install -y keystone

sudo service keystone status

sudo rm /var/lib/keystone/keystone.db


sudo vi /etc/keystone/keystone.conf

connection = mysql://keystone:TEMP_PASSWORD@controller/keystone

token_format = UUID


sudo service keystone restart

sudo keystone-manage db_sync


[ Configure Keystone ]

vi keystone_basic.sh

#!/bin/sh

#

# Keystone basic configuration 


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#

HOST_IP=192.168.230.131

ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin_pass}

SERVICE_PASSWORD=${SERVICE_PASSWORD:-service_pass}

export SERVICE_TOKEN="ADMIN"

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"

SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}


get_id () {

    echo `$@ | awk '/ id / { print $4 }'`

}
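
# Note: get_id grabs the 4th whitespace-separated field of the " id " row of the
# CLI's ASCII table; e.g. for the row
# |   id   | 5e795212d0804ad89234d9a1ac30c8ca |
# awk sees $1="|" $2="id" $3="|" $4=<the uuid>.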


# Tenants

ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)

SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)



# Users

ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)



# Roles

ADMIN_ROLE=$(get_id keystone role-create --name=admin)

KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)

KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)


# Add Roles to Users in Tenants

keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT


# The Member role is used by Horizon and Swift

MEMBER_ROLE=$(get_id keystone role-create --name=Member)


# Configure service users/roles

NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE


GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE


QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE


CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE


vi keystone_endpoints_basic.sh

#!/bin/sh

#

# Keystone basic Endpoints


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#


# Host address

HOST_IP=192.168.230.131

EXT_HOST_IP=192.168.75.131

VOLUME_HOST_IP=192.168.230.131

VOLUME_EXT_HOST_IP=192.168.75.131

NETWORK_HOST_IP=192.168.230.132

NETWORK_EXT_HOST_IP=192.168.75.133


# MySQL definitions

MYSQL_USER=keystone

MYSQL_DATABASE=keystone

MYSQL_HOST=$HOST_IP

MYSQL_PASSWORD=TEMP_PASSWORD


# Keystone definitions

KEYSTONE_REGION=RegionOne

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"


while getopts "u:D:p:m:K:R:E:T:vh" opt; do

  case $opt in

    u)

      MYSQL_USER=$OPTARG

      ;;

    D)

      MYSQL_DATABASE=$OPTARG

      ;;

    p)

      MYSQL_PASSWORD=$OPTARG

      ;;

    m)

      MYSQL_HOST=$OPTARG

      ;;

    K)

      MASTER=$OPTARG

      ;;

    R)

      KEYSTONE_REGION=$OPTARG

      ;;

    E)

      export SERVICE_ENDPOINT=$OPTARG

      ;;

    T)

      export SERVICE_TOKEN=$OPTARG

      ;;

    v)

      set -x

      ;;

    h)

      cat <<EOF

Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]

       [-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ] 

       [ -T keystone_token ]

          

Add -v for verbose mode, -h to display this message.

EOF

      exit 0

      ;;

    \?)

      echo "Unknown option -$OPTARG" >&2

      exit 1

      ;;

    :)

      echo "Option -$OPTARG requires an argument" >&2

      exit 1

      ;;

  esac

done  


if [ -z "$KEYSTONE_REGION" ]; then

  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_TOKEN" ]; then

  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_ENDPOINT" ]; then

  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2

  missing_args="true"

fi


if [ -z "$MYSQL_PASSWORD" ]; then

  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2

  missing_args="true"

fi


if [ -n "$missing_args" ]; then

  exit 1

fi

 

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'

keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'

keystone service-create --name glance --type image --description 'OpenStack Image Service'

keystone service-create --name keystone --type identity --description 'OpenStack Identity'

keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'

keystone service-create --name quantum --type network --description 'OpenStack Networking service'


create_endpoint () {

  case $1 in

    compute)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'

    ;;

    volume)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$VOLUME_EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s'

    ;;

    image)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/v2' --adminurl 'http://'"$HOST_IP"':9292/v2' --internalurl 'http://'"$HOST_IP"':9292/v2'

    ;;

    identity)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'

    ;;

    ec2)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'

    ;;

    network)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$NETWORK_EXT_HOST_IP"':9696/' --adminurl 'http://'"$NETWORK_HOST_IP"':9696/' --internalurl 'http://'"$NETWORK_HOST_IP"':9696/'

    ;;

  esac

}


for i in compute volume image object-store identity ec2 network; do

  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1

  create_endpoint $i $id

done
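
Note that the loop above also iterates over object-store, but no swift service was registered by the service-create calls, so the SELECT returns nothing for that type and create_endpoint matches no case; it is silently skipped. To verify the results:

keystone service-list
keystone endpoint-list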


# Keystone admin credentials

vi creds

unset http_proxy

unset https_proxy

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin_pass

export OS_AUTH_URL="http://controller:5000/v2.0/"


source creds

keystone user-list


[ Install Glance ]

sudo apt-get install -y glance

sudo rm /var/lib/glance/glance.sqlite

sudo service glance-api status

sudo service glance-registry status


sudo vi /etc/glance/glance-api-paste.ini

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

delay_auth_decision = true

auth_host = 192.168.230.141

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = service_pass


sudo vi /etc/glance/glance-registry-paste.ini

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

auth_host = 192.168.230.141

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = service_pass


sudo vi /etc/glance/glance-api.conf

sql_connection = mysql://glance:TEMP_PASSWORD@192.168.230.141/glance

enable_v1_api = True

enable_v2_api = True


[paste_deploy]

flavor=keystone


sudo vi /etc/glance/glance-registry.conf

sql_connection = mysql://glance:TEMP_PASSWORD@192.168.230.141/glance


[paste_deploy]

flavor=keystone


sudo glance-manage db_sync

sudo service glance-registry restart

sudo service glance-api restart


[ Register an image ]

mkdir images

cd images

wget --no-check-certificate https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

glance image-create --name cirros --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img

glance image-list


[ Install nova-api and nova-scheduler ]

sudo apt-get install -y nova-api nova-scheduler nova-cert novnc nova-consoleauth nova-novncproxy nova-doc nova-conductor


mysql -uroot -pTEMP_PASSWORD -e 'CREATE DATABASE nova;'

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'TEMP_PASSWORD';"

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';"


sudo vi /etc/nova/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = nova

   admin_password = service_pass

   signing_dir = /tmp/keystone-signing-nova

   # Workaround for https://bugs.launchpad.net/nova/+bug/1154809

   auth_version = v2.0



sudo vi /etc/nova/nova.conf


[DEFAULT]

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/run/lock/nova

verbose=True

api_paste_config=/etc/nova/api-paste.ini

compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

rabbit_host=192.168.230.141

nova_url=http://192.168.230.141:8774/v1.1/

sql_connection=mysql://nova:imsi00@192.168.230.141/nova

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf


# Auth

use_deprecated_auth=false

auth_strategy=keystone


# Imaging service

glance_api_servers=192.168.230.141:9292

image_service=nova.image.glance.GlanceImageService


# Vnc configuration

novnc_enabled=true

novncproxy_base_url=http://192.168.75.141:6080/vnc_auto.html

novncproxy_port=6080

vncserver_proxyclient_address=192.168.230.141

vncserver_listen=0.0.0.0


# Network settings

network_api_class=nova.network.quantumv2.api.API

quantum_url=http://192.168.230.143:9696

quantum_auth_strategy=keystone

quantum_admin_tenant_name=service

quantum_admin_username=quantum

quantum_admin_password=service_pass

quantum_admin_auth_url=http://192.168.230.141:35357/v2.0

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


#Metadata

service_quantum_metadata_proxy = True

quantum_metadata_proxy_shared_secret = helloOpenStack

metadata_host = 192.168.230.141

metadata_listen = 127.0.0.1

metadata_listen_port = 8775


# Compute #

compute_driver=libvirt.LibvirtDriver


# Cinder #

volume_api_class=nova.volume.cinder.API

osapi_volume_listen_port=5900


sudo nova-manage db sync


# restart nova services

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done


# check nova services

nova-manage service list


[ Install Horizon ]

sudo apt-get install -y openstack-dashboard memcached


# Remove the Ubuntu theme

sudo apt-get purge openstack-dashboard-ubuntu-theme


# Reload apache and memcached

sudo service apache2 restart

sudo service memcached restart


# Dashboard URL

http://192.168.75.141/horizon/


##################   Installing Cinder   #####################


[ Configure ntp ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Configure the network ]


[ Change the hostname ]


[ Install Cinder ]

sudo apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

sudo sed -i 's/false/true/g' /etc/default/iscsitarget

sudo vi /etc/iscsi/iscsid.conf

   node.startup = automatic

sudo service iscsitarget start

sudo service open-iscsi start


mysql -uroot -pTEMP_PASSWORD -e 'CREATE DATABASE cinder;'

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'TEMP_PASSWORD';"

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';"


sudo vi /etc/cinder/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   service_protocol = http

   service_host = 192.168.75.141

   service_port = 5000

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = cinder

   admin_password = service_pass


sudo vi /etc/cinder/cinder.conf

   [DEFAULT]

   rootwrap_config=/etc/cinder/rootwrap.conf

   sql_connection = mysql://cinder:TEMP_PASSWORD@192.168.230.141/cinder

   api_paste_config = /etc/cinder/api-paste.ini

   iscsi_helper=ietadm

   volume_name_template = volume-%s

   volume_group = cinder-volumes

   verbose = True

   auth_strategy = keystone

   rabbit_host = 192.168.230.141


sudo cinder-manage db sync


[ Create the cinder-volumes volume group ]

dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=10G

sudo losetup /dev/loop2 cinder-volumes

sudo fdisk /dev/loop2


1. sudo fdisk -l

2. sudo fdisk /dev/loop2   (the loop device created above; on a real disk this would be e.g. /dev/sdb)

3. Press 'n' to create a new disk partition,

4. Press 'p' to create a primary disk partition,

5. Press '1' to denote it as the 1st disk partition,

6. Either press ENTER twice to accept the default of 1st and last cylinder - to convert the remainder of the disk to a single partition

   -OR- press ENTER once to accept the default of the 1st, and then choose how big you want the partition to be by specifying +size{K,M,G},

   e.g. +5G or +6700M.

7. Press 't', then select the new partition you made.

8. Enter '8e' to change the new partition's type to 8e, i.e. the Linux LVM partition type.

9. Press 'p' to display the hard disk partition setup. Note that the first partition is denoted as /dev/sda1 in Linux.

10. Press 'w' to write the partition table and exit fdisk upon completion.


sudo pvcreate /dev/loop2

sudo vgcreate cinder-volumes /dev/loop2
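
A quick sanity check on the LVM setup (sketch):

sudo pvdisplay /dev/loop2
sudo vgdisplay cinder-volumes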


# Re-attach the backing file automatically on server reboot

sudo vi /etc/init.d/cinder-setup-backing-file

#!/bin/sh

losetup /dev/loop2 /home/nova/cinder-volumes

exit 0


sudo chmod 755 /etc/init.d/cinder-setup-backing-file

sudo ln -s /etc/init.d/cinder-setup-backing-file /etc/rc2.d/S10cinder-setup-backing-file


# restart cinder services

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done


# verify cinder services

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done



##################   Installing the Quantum Server   #####################


[ Configure ntp ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Configure the network ]


[ Change the hostname ]


[ Install the Quantum server ]

sudo apt-get install -y quantum-server

sudo rm -rf /var/lib/quantum/ovs.sqlite


mysql -uroot -pTEMP_PASSWORD -e 'CREATE DATABASE quantum;'

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'TEMP_PASSWORD';"

mysql -uroot -pTEMP_PASSWORD -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'TEMP_PASSWORD';"


sudo vi /etc/quantum/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


sudo service quantum-server restart

sudo service quantum-server status


##################   Installing the Quantum Network node   #####################


[ Configure ntp ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Add eth2 as the VM public network - used as the Quantum public network ]

sudo vi /etc/network/interfaces


auto lo

iface lo inet loopback


# Host public network

auto eth0

iface eth0 inet static

      address 192.168.75.144

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# VM private network, host private network

auto eth1

iface eth1 inet static

      address 192.168.230.144

      netmask 255.255.255.0


# VM public network

auto eth2

iface eth2 inet manual

      up ifconfig $IFACE 0.0.0.0 up

      up ip link set $IFACE promisc on

      down ip link set $IFACE promisc off

      down ifconfig $IFACE down


sudo service networking restart


[ Change the hostname ]


[ Install Open vSwitch ]

sudo apt-get install -y openvswitch-switch openvswitch-datapath-dkms


# Create bridges

sudo ovs-vsctl add-br br-int

sudo ovs-vsctl add-br br-ex


[ Install the Quantum Open vSwitch agent, dnsmasq, DHCP agent, L3 agent, and metadata agent ]

sudo apt-get install -y quantum-plugin-openvswitch-agent dnsmasq quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent


sudo vi /etc/quantum/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

   [DATABASE]

   sql_connection = mysql://quantum:TEMP_PASSWORD@192.168.230.141/quantum


   [OVS]

   tenant_network_type = gre

   enable_tunneling = True

   tunnel_id_ranges = 1:1000

   integration_bridge = br-int

   tunnel_bridge = br-tun

   local_ip = 192.168.230.144


sudo vi /etc/quantum/l3_agent.ini

   # Append at the bottom of the file

   auth_url = http://192.168.230.141:35357/v2.0

   auth_region = RegionOne

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/metadata_agent.ini

   auth_url = http://192.168.230.141:35357/v2.0

   auth_region = RegionOne

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


   nova_metadata_ip = 192.168.230.141

   nova_metadata_port = 8775

   metadata_proxy_shared_secret = helloOpenStack


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


# restart Quantum services

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done


# Connect br-ex to the public network

sudo ovs-vsctl add-port br-ex eth2
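
To verify the bridge and port layout (sketch; br-tun is created by the OVS agent itself):

sudo ovs-vsctl show        # br-ex should now list eth2 as a port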



##################   Installing the Compute node   #####################


[ Configure ntp ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Configure the network ]


[ Change the hostname ]


[ Install Open vSwitch ]

sudo apt-get install -y openvswitch-switch openvswitch-datapath-dkms


# Create the bridge

sudo ovs-vsctl add-br br-int


[ Install the Quantum Open vSwitch agent ]

sudo apt-get install -y quantum-plugin-openvswitch-agent


sudo vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

   [DATABASE]

   sql_connection = mysql://quantum:imsi00@192.168.230.141/quantum


   [OVS]

   tenant_network_type = gre

   enable_tunneling = True

   tunnel_id_ranges = 1:1000

   integration_bridge = br-int

   tunnel_bridge = br-tun

   local_ip = 192.168.230.145


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


   [keystone_authtoken]    # ----> is this setting actually required?

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass

   signing_dir = /var/lib/quantum/keystone-signing


# quantum openVSwitch agent restart

sudo service quantum-plugin-openvswitch-agent restart


[ Install Nova ]

sudo apt-get install -y nova-compute-kvm open-iscsi


sudo vi /etc/nova/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = nova

   admin_password = service_pass

   signing_dir = /tmp/keystone-signing-nova

   # Workaround for https://bugs.launchpad.net/nova/+bug/1154809

   auth_version = v2.0


sudo vi /etc/nova/nova-compute.conf

   [DEFAULT]

   libvirt_type=kvm

   libvirt_ovs_bridge=br-int

   libvirt_vif_type=ethernet

   libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

   libvirt_use_virtio_for_bridges=True


sudo vi /etc/nova/nova.conf

   [DEFAULT]

   logdir=/var/log/nova

   state_path=/var/lib/nova

   lock_path=/run/lock/nova

   verbose=True

   api_paste_config=/etc/nova/api-paste.ini

   compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

   rabbit_host=192.168.230.141

   nova_url=http://192.168.230.141:8774/v1.1/

   sql_connection=mysql://nova:imsi00@192.168.230.141/nova

   root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf


   # Auth

   use_deprecated_auth=false

   auth_strategy=keystone


   # Imaging service

   glance_api_servers=192.168.230.141:9292

   image_service=nova.image.glance.GlanceImageService


   # Vnc configuration

   novnc_enabled=true

   novncproxy_base_url=http://192.168.75.141:6080/vnc_auto.html

   novncproxy_port=6080

   vncserver_proxyclient_address=192.168.230.141

   vncserver_listen=0.0.0.0


   # Network settings

   network_api_class=nova.network.quantumv2.api.API

   quantum_url=http://192.168.230.141:9696

   quantum_auth_strategy=keystone

   quantum_admin_tenant_name=service

   quantum_admin_username=quantum

   quantum_admin_password=service_pass

   quantum_admin_auth_url=http://192.168.230.141:35357/v2.0

   libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

   linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

   firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


   #Metadata

   service_quantum_metadata_proxy = True

   quantum_metadata_proxy_shared_secret = helloOpenStack

   metadata_host = 192.168.230.141

   metadata_listen = 127.0.0.1

   metadata_listen_port = 8775


   # Compute #

   compute_driver=libvirt.LibvirtDriver


   # Cinder #

   volume_api_class=nova.volume.cinder.API

   osapi_volume_listen_port=5900


# restart nova service

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done


# nova service status

nova-manage service list



[ Running Nova commands ]

# Run with admin credentials

source creds


# Create a tenant and a user

keystone tenant-create --name myproject

keystone role-list

keystone user-create --name=myuser --pass=TEMP_PASSWORD --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --email=myuser@domain.com

keystone user-role-add --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --user-id 29736a14d7d4471fa50ca04da38d89b1 --role-id 022cd675521b45ffb94693e7cab07db7


# Create a network

quantum net-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 net_myproject

quantum net-list


# Create an internal private subnet on the network

quantum subnet-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --name net_myproject_internal net_myproject 10.0.0.0/24


# Create a router

quantum router-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 net_myproject_router


# Bind the L3 agent to the router

quantum l3-agent-router-add 829f424b-0879-4fee-a373-84c0f0bcbb9b net_myproject_router


# Attach the router to the subnet

quantum router-interface-add f3e2c02e-2146-4388-b415-c95d45f4f3a3 99189c7b-50cd-4353-9358-2dd74efbb762


# restart quantum services

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done


# Create the environment file

vi myproject

export OS_TENANT_NAME=myproject

export OS_USERNAME=myuser

export OS_PASSWORD=TEMP_PASSWORD

export OS_AUTH_URL="http://192.168.230.141:5000/v2.0/"


# Proceed with the project credentials

source myproject








nova image-list

nova secgroup-list

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

ssh-keygen

nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

nova keypair-list

nova flavor-list

nova boot test01 --flavor 1 --image 5c4c2339-55bd-4e9b-86cb-23694e3b9b17 --key_name mykey --security_group default


nova floating-ip-list

nova floating-ip-create

nova add-floating-ip 80eb7545-258e-4f26-a842-c1993cb03ae5 192.168.75.225

nova remove-floating-ip 80eb7545-258e-4f26-a842-c1993cb03ae5 192.168.75.225

nova floating-ip-delete 192.168.75.225


nova volume-list

nova volume-create --display_name ebs01 1

nova volume-attach 80eb7545-258e-4f26-a842-c1993cb03ae5 c209e2f1-5ff7-496c-8928-d57487d86c6f /dev/vdb

nova volume-detach 80eb7545-258e-4f26-a842-c1993cb03ae5 a078f20a-62c6-432c-8fa2-7cfd9950a64f

nova volume-delete a078f20a-62c6-432c-8fa2-7cfd9950a64f


# After logging in, format as ext4 and mount

mke2fs -t ext4 /dev/vdb

mount /dev/vdb /test



[ Access the VNC console ]

nova get-vnc-console 80eb7545-258e-4f26-a842-c1993cb03ae5 novnc






[ OpenStack Contribution List ]


1. Per-flavor network QoS assignment


2. Per-task API checks


3. Scheduler

    - How filters and weights are applied


4. EBS backup

    - A service that backs up EBS incrementally at the file level instead of via snapshots


5. ENI (Elastic Network Interface)


6. Project-to-host filter scheduling


7. Live migration of VMs booted from EBS




nova variable

OpenStack/Nova 2012. 7. 21. 17:33
vi /usr/lib/python2.7/json/encoder.py

import datetime           (add at line 4)
...
...
elif isinstance(o, datetime.datetime):   (add at line 431)
    pass
elif o.__module__.startswith('nova'):
    yield str(o)

Converting objects to JSON for printing:
import json
from nova.openstack.common import jsonutils  (use either json or jsonutils)
...
LOG.debug("image_service = %s", jsonutils.dumps(jsonutils.to_primitive(vars(image_service)), indent=2))


nova.api.openstack.compute.servers.py >> Controller >> create()

inst_type = {
  "memory_mb": 512,
  "root_gb": 0,
  "deleted_at": null,
  "name": "m1.tiny",
  "deleted": false,
  "created_at": null,
  "ephemeral_gb": 0,
  "updated_at": null,
  "disabled": false,
  "vcpus": 1,
  "extra_specs": {},
  "swap": 0,
  "rxtx_factor": 1.0,
  "is_public": true,
  "flavorid": "1",
  "vcpu_weight": null,
  "id": 2
}
image_href = "5c4c2339-55bd-4e9b-86cb-23694e3b9b17"
display_name = "test02"
display_description = "test02"
key_name = "mykey"
metadata = {}
access_ip_v4 = null
access_ip_v6 = null
injected_files = []
admin_password = "TbvbCd2NgA5S"
min_count = 1
max_count = 1
requested_networks = [
  [
    "0802c791-d4aa-473b-94a8-46d2b4aff91b",
    "192.168.100.5"
  ]
]
security_group = [
  "default"
]
user_data = null
availability_zone = null
config_drive = null
block_device_mapping = []
auto_disk_config = null
scheduler_hints = {}



nova.compute.api.py >> API >> _create_instance()

[ New DB row insertion ]

create_db_entry_for_new_instance

image_service = <nova.image.glance.GlanceImageService object at 0x588c450>

image_id = "5c4c2339-55bd-4e9b-86cb-23694e3b9b17"

image = {
  "status": "active",
  "name": "tty-linux",
  "deleted": false,
  "container_format": "ami",
  "created_at": ,
  "disk_format": "ami",
  "updated_at": ,
  "id": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
  "owner": "2ffae825c88b448bad4ef4d14f5c1204",
  "min_ram": 0,
  "checksum": "10047a119149e08fb206eea89832eee0",
  "min_disk": 0,
  "is_public": false,
  "deleted_at": null,
  "properties": {
    "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
    "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269"
  },
  "size": 25165824
}

context = {
  "project_name": "service",
  "user_id": "fa8ecb2a7110435daa10a5e9e459c7ca",
  "roles": [
    "admin",
    "member"
  ],
  "_read_deleted": "no",
  "timestamp": "2012-12-26T14:49:00.820425",
  "auth_token": "1f31ccc31d324ba88802826270772522",
  "remote_address": "192.168.75.137",
  "quota_class": null,
  "is_admin": true,
  "service_catalog": [
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204/v2.0",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204",
          "id": "82d6c5ae2899473c8aa77bd2ae99881b",
          "internalURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204"
        }
      ],
      "type": "volume",
      "name": "volume"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:9292/v1",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:9292/v1",
          "id": "2e65219ddb4143b9b0a89c334a5177dc",
          "internalURL": "http://192.168.75.137:9292/v1"
        }
      ],
      "type": "image",
      "name": "glance"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204",
          "id": "0e82d644a5cb47b1890f81bf67b43dec",
          "internalURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204"
        }
      ],
      "type": "compute",
      "name": "nova"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:35357/v2.0",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:5000/v2.0",
          "id": "2d85bf25bb7e4e6a82efa67063d51ac1",
          "internalURL": "http://192.168.75.137:5000/v2.0"
        }
      ],
      "type": "identity",
      "name": "keystone"
    }
  ],
  "request_id": "req-bda14315-16de-4b23-8d53-24745f87fdad",
  "instance_lock_checked": false,
  "project_id": "2ffae825c88b448bad4ef4d14f5c1204",
  "user_name": "admin"
}

request_spec = {
  "block_device_mapping": [],
  "image": {
    "status": "active",
    "name": "tty-linux",
    "deleted": false,
    "container_format": "ami",
    "created_at": "2012-11-30T07:51:06.000000",
    "disk_format": "ami",
    "updated_at": "2012-11-30T07:51:07.000000",
    "properties": {
      "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
      "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269"
    },
    "min_disk": 0,
    "min_ram": 0,
    "checksum": "10047a119149e08fb206eea89832eee0",
    "owner": "2ffae825c88b448bad4ef4d14f5c1204",
    "is_public": false,
    "deleted_at": null,
    "id": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
    "size": 25165824
  },
  "instance_type": {
    "memory_mb": 512,
    "root_gb": 0,
    "deleted_at": null,
    "name": "m1.tiny",
    "deleted": false,
    "created_at": null,
    "ephemeral_gb": 0,
    "updated_at": null,
    "disabled": false,
    "vcpus": 1,
    "extra_specs": {},
    "swap": 0,
    "rxtx_factor": 1.0,
    "is_public": true,
    "flavorid": "1",
    "vcpu_weight": null,
    "id": 2
  },
  "instance_properties": {
    "vm_state": "building",
    "availability_zone": null,
    "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269",
    "instance_type_id": 2,
    "user_data": null,
    "vm_mode": null,
    "reservation_id": "r-sviqmkvr",
    "user_id": "fa8ecb2a7110435daa10a5e9e459c7ca",
    "display_description": "test02",
    "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDPrhT0VICqukep0Zl3lz+ZvzZOKVwBEa9IFk2rUcDnjse9zGPy9bZHorEoGYwiywOTTC+Q422rIhAJQvev7OKF4qViyndbLPrlZudeA7oFBc2I0rqUmSwrmQv1Pz4h8jrMdgelgWS1QDPgyFp3O72sS9wP0yQMZIneSdLIV2SxrxVxsISYL5GhbF/A7G9ejSRmLoZgQoDmDW+CtIHFX8EsDDC9K94Dz9F3UCMZwCGGRO4S2o+wValsAuE0xLUF8U6VJ86NrILEJYvNVXPeKyQl9Ktuow0LWqjxtnLv78R/5ayKff+bX/7cekNzG8yeTog7it4kdKaitIb+G5j+h7T nova@ubuntu\n",
    "power_state": 0,
    "progress": 0,
    "project_id": "2ffae825c88b448bad4ef4d14f5c1204",
    "config_drive": "",
    "ephemeral_gb": 0,
    "access_ip_v6": null,
    "access_ip_v4": null,
    "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
    "key_name": "mykey",
    "display_name": "test02",
    "config_drive_id": "",
    "architecture": null,
    "root_gb": 0,
    "locked": false,
    "launch_time": "2012-12-26T14:42:55Z",
    "memory_mb": 512,
    "vcpus": 1,
    "image_ref": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
    "root_device_name": null,
    "auto_disk_config": null,
    "os_type": null,
    "metadata": {}
  },
  "security_group": [
    "default"
  ],
  "instance_uuids": [
    "55c4f897-11a7-457b-9b70-c8ef28549711"
  ]
}

admin_password = "5godsYKky8AR"
injected_files = []
requested_networks = [
  [
    "0802c791-d4aa-473b-94a8-46d2b4aff91b",
    "192.168.100.5"
  ]
]
filter_properties = {
  "scheduler_hints": {}
}
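
[ Sketch: what _create_instance() does with the values above ]

Reading the dumps together: _create_instance() validates the flavor and image, derives instance_properties from them, writes one DB row per requested instance via create_db_entry_for_new_instance(), and then hands the assembled request_spec to the scheduler. A minimal, self-contained sketch of that assembly (simplified stand-ins, not nova's actual signatures):

import uuid

# Illustrative only: the real method takes many more arguments and
# delegates DB access and RPC casts to other modules.
def create_instance(flavor, image, display_name, key_name,
                    security_group, max_count=1):
    instance_uuids = []
    for _ in range(max_count):
        # Base properties come from the flavor and image, matching
        # the instance_properties dump above.
        instance_properties = {
            "vm_state": "building",
            "instance_type_id": flavor["id"],
            "memory_mb": flavor["memory_mb"],
            "vcpus": flavor["vcpus"],
            "root_gb": flavor["root_gb"],
            "ephemeral_gb": flavor["ephemeral_gb"],
            "image_ref": image["id"],
            "kernel_id": image["properties"].get("kernel_id", ""),
            "ramdisk_id": image["properties"].get("ramdisk_id", ""),
            "display_name": display_name,
            "key_name": key_name,
            "metadata": {},
        }
        # create_db_entry_for_new_instance() persists this row and
        # attaches the security groups; here we only record the uuid.
        instance_properties["uuid"] = str(uuid.uuid4())
        instance_uuids.append(instance_properties["uuid"])
    # This request_spec is what gets cast to the scheduler below.
    return {
        "image": image,
        "instance_type": flavor,
        "instance_properties": instance_properties,
        "security_group": security_group,
        "block_device_mapping": [],
        "instance_uuids": instance_uuids,
    }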


nova.scheduler.filter_scheduler.py >> FilterScheduler >> schedule_run_instance()
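
[ Sketch: the filter-and-weigh idea in schedule_run_instance() ]

This is where the host is chosen: each enabled filter prunes the candidate hosts, the survivors are weighed, and the build is cast to the winner. A rough, self-contained sketch of the idea (hypothetical filter and host dicts, not the real HostState/driver API):

def ram_filter(host, spec):
    # Mirrors the "limits": {"memory_mb": 3000.0} seen in
    # filter_properties: reject hosts without enough free RAM.
    return host["free_ram_mb"] >= spec["instance_type"]["memory_mb"]

def schedule(hosts, spec, filters=(ram_filter,)):
    candidates = [h for h in hosts if all(f(h, spec) for f in filters)]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Weigh by free RAM: the default weigher of this era spreads
    # instances onto the host with the most free memory.
    return max(candidates, key=lambda h: h["free_ram_mb"])

hosts = [{"name": "folsom", "free_ram_mb": 2488},
         {"name": "compute2", "free_ram_mb": 512}]
spec = {"instance_type": {"memory_mb": 512}}
print(schedule(hosts, spec)["name"])   # -> folsom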


nova.compute.manager.py >> ComputeManager >> _run_instance()

request_spec = {
  "block_device_mapping": [],
  "image": {
    "status": "active",
    "name": "tty-linux",
    "deleted": false,
    "container_format": "ami",
    "created_at": "2012-12-16T10:37:48.000000",
    "disk_format": "ami",
    "updated_at": "2012-12-16T10:37:49.000000",
    "properties": {
      "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
      "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"
    },
    "min_disk": 0,
    "min_ram": 0,
    "checksum": "10047a119149e08fb206eea89832eee0",
    "owner": "0c74b5d96202433196af2faa9bff4bde",
    "is_public": false,
    "deleted_at": null,
    "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
    "size": 25165824
  },
  "instance_type": {
    "memory_mb": 512,
    "root_gb": 0,
    "deleted_at": null,
    "name": "m1.tiny",
    "deleted": false,
    "created_at": null,
    "ephemeral_gb": 0,
    "updated_at": null,
    "disabled": false,
    "vcpus": 1,
    "extra_specs": {},
    "swap": 0,
    "rxtx_factor": 1.0,
    "is_public": true,
    "flavorid": "1",
    "vcpu_weight": null,
    "id": 2
  },
  "instance_properties": {
    "vm_state": "building",
    "availability_zone": null,
    "launch_time": "2012-12-24T16:45:50Z",
    "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
    "instance_type_id": 2,
    "user_data": null,
    "vm_mode": null,
    "reservation_id": "r-gzio9556",
    "user_id": "034120010ad64ecfb1eeb2ac5f16854d",
    "display_description": "test01",
    "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",
    "power_state": 0,
    "progress": 0,
    "project_id": "0c74b5d96202433196af2faa9bff4bde",
    "config_drive": "",
    "ephemeral_gb": 0,
    "access_ip_v6": null,
    "access_ip_v4": null,
    "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
    "key_name": "mykey",
    "display_name": "test01",
    "config_drive_id": "",
    "architecture": null,
    "root_gb": 0,
    "locked": false,
    "launch_index": 0,
    "memory_mb": 512,
    "vcpus": 1,
    "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
    "root_device_name": null,
    "auto_disk_config": null,
    "os_type": null,
    "metadata": {}
  },
  "security_group": [
    "default"
  ],
  "instance_uuids": [
    "1be889ba-fe3b-4eb6-8730-157db1582f88"
  ]
}


filter_properties = {
  "config_options": {},
  "limits": {
    "memory_mb": 3000.0
  },
  "request_spec": {
    "block_device_mapping": [],
    "image": {
      "status": "active",
      "name": "tty-linux",
      "deleted": false,
      "container_format": "ami",
      "created_at": "2012-12-16T10:37:48.000000",
      "disk_format": "ami",
      "updated_at": "2012-12-16T10:37:49.000000",
      "properties": {
        "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
        "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"
      },
      "min_disk": 0,
      "min_ram": 0,
      "checksum": "10047a119149e08fb206eea89832eee0",
      "owner": "0c74b5d96202433196af2faa9bff4bde",
      "is_public": false,
      "deleted_at": null,
      "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
      "size": 25165824
    },
    "instance_type": {
      "memory_mb": 512,
      "root_gb": 0,
      "deleted_at": null,
      "name": "m1.tiny",
      "deleted": false,
      "created_at": null,
      "ephemeral_gb": 0,
      "updated_at": null,
      "disabled": false,
      "vcpus": 1,
      "extra_specs": {},
      "swap": 0,
      "rxtx_factor": 1.0,
      "is_public": true,
      "flavorid": "1",
      "vcpu_weight": null,
      "id": 2
    },
    "instance_properties": {
      "vm_state": "building",
      "availability_zone": null,
      "launch_time": "2012-12-24T16:45:50Z",
      "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
      "instance_type_id": 2,
      "user_data": null,
      "vm_mode": null,
      "reservation_id": "r-gzio9556",
      "user_id": "034120010ad64ecfb1eeb2ac5f16854d",
      "display_description": "test01",
      "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",
      "power_state": 0,
      "progress": 0,
      "project_id": "0c74b5d96202433196af2faa9bff4bde",
      "config_drive": "",
      "ephemeral_gb": 0,
      "access_ip_v6": null,
      "access_ip_v4": null,
      "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
      "key_name": "mykey",
      "display_name": "test01",
      "config_drive_id": "",
      "architecture": null,
      "root_gb": 0,
      "locked": false,
      "launch_index": 0,
      "memory_mb": 512,
      "vcpus": 1,
      "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
      "root_device_name": null,
      "auto_disk_config": null,
      "os_type": null,
      "metadata": {}
    },
    "security_group": [
      "default"
    ],
    "instance_uuids": [
      "1be889ba-fe3b-4eb6-8730-157db1582f88"
    ]
  },
  "instance_type": {
    "memory_mb": 512,
    "root_gb": 0,
    "deleted_at": null,
    "name": "m1.tiny",
    "deleted": false,
    "created_at": null,
    "ephemeral_gb": 0,
    "updated_at": null,
    "disabled": false,
    "vcpus": 1,
    "extra_specs": {},
    "swap": 0,
    "rxtx_factor": 1.0,
    "is_public": true,
    "flavorid": "1",
    "vcpu_weight": null,
    "id": 2
  },
  "retry": {
    "num_attempts": 1,
    "hosts": [
      "folsom"
    ]
  },
  "scheduler_hints": {}
}
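
[ Note: the retry key ]

The "retry" entry above is how rescheduling works: filter_properties travels with the request, and each chosen host is appended to retry["hosts"] so a failed build is not retried on a host that already failed. Roughly (the max-attempts value is an assumption here; nova reads it from scheduler_max_attempts):

def populate_retry(filter_properties, host, max_attempts=3):
    # Called each time a host is picked; gives up once the request
    # has bounced around too many times.
    retry = filter_properties.setdefault(
        "retry", {"num_attempts": 0, "hosts": []})
    retry["num_attempts"] += 1
    retry["hosts"].append(host)
    if retry["num_attempts"] > max_attempts:
        raise RuntimeError("Exceeded max scheduling attempts")

props = {"scheduler_hints": {}}
populate_retry(props, "folsom")
print(props["retry"])   # {'num_attempts': 1, 'hosts': ['folsom']}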


requested_networks = [
  [
    "0802c791-d4aa-473b-94a8-46d2b4aff91b",
    "192.168.100.5"
  ]
]

injected_files = []
admin_password = "6Ty7wZA9wc5w"
is_first_time = true


instance = {
  "vm_state": "building",
  "availability_zone": null,
  "terminated_at": null,
  "ephemeral_gb": 0,
  "instance_type_id": 2,
  "user_data": null,
  "vm_mode": null,
  "deleted_at": null,
  "reservation_id": "r-gzio9556",
  "id": 4,
  "security_groups": [
    {
      "project_id": "0c74b5d96202433196af2faa9bff4bde",
      "user_id": "034120010ad64ecfb1eeb2ac5f16854d",
      "name": "default",
      "deleted": false,
      "created_at": "2012-12-16T11:47:01.000000",
      "updated_at": null,
      "rules": [
        {
          "from_port": 22,
          "protocol": "tcp",
          "deleted": false,
          "created_at": "2012-12-16T11:47:26.000000",
          "updated_at": null,
          "id": 1,
          "to_port": 22,
          "parent_group_id": 1,
          "cidr": "0.0.0.0/0",
          "deleted_at": null,
          "group_id": null
        },
        {
          "from_port": -1,
          "protocol": "icmp",
          "deleted": false,
          "created_at": "2012-12-16T11:47:41.000000",
          "updated_at": null,
          "id": 2,
          "to_port": -1,
          "parent_group_id": 1,
          "cidr": "0.0.0.0/0",
          "deleted_at": null,
          "group_id": null
        }
      ],
      "deleted_at": null,
      "id": 1,
      "description": "default"
    }
  ],
  "disable_terminate": false,
  "root_device_name": null,
  "user_id": "034120010ad64ecfb1eeb2ac5f16854d",
  "uuid": "1be889ba-fe3b-4eb6-8730-157db1582f88",
  "server_name": null,
  "default_swap_device": null,
  "info_cache": {
    "instance_uuid": "1be889ba-fe3b-4eb6-8730-157db1582f88",
    "deleted": false,
    "created_at": "2012-12-24T16:45:50.000000",
    "updated_at": null,
    "network_info": "[]",
    "deleted_at": null,
    "id": 4
  },
  "hostname": "test01",
  "launched_on": null,
  "display_description": "test01",
  "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",
  "deleted": false,
  "scheduled_at": "2012-12-24T16:45:50.413093",
  "power_state": 0,
  "default_ephemeral_device": null,
  "progress": 0,
  "project_id": "0c74b5d96202433196af2faa9bff4bde",
  "launched_at": null,
  "config_drive": "",
  "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
  "access_ip_v6": null,
  "access_ip_v4": null,
  "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
  "key_name": "mykey",
  "updated_at": "2012-12-24T16:45:50.441013",
  "host": null,
  "display_name": "test01",
  "task_state": "scheduling",
  "shutdown_terminate": false,
  "root_gb": 0,
  "locked": false,
  "name": "instance-00000004",
  "created_at": "2012-12-24T16:45:50.000000",
  "launch_index": 0,
  "memory_mb": 512,
  "instance_type": {
    "memory_mb": 512,
    "root_gb": 0,
    "name": "m1.tiny",
    "deleted": false,
    "created_at": null,
    "ephemeral_gb": 0,
    "updated_at": null,
    "disabled": false,
    "vcpus": 1,
    "flavorid": "1",
    "swap": 0,
    "rxtx_factor": 1.0,
    "is_public": true,
    "deleted_at": null,
    "vcpu_weight": null,
    "id": 2
  },
  "vcpus": 1,
  "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
  "architecture": null,
  "auto_disk_config": null,
  "os_type": null,
  "metadata": []
}


image_meta = {
  "status": "active",
  "name": "tty-linux",
  "deleted": false,
  "container_format": "ami",
  "created_at": "2012-12-16T10:37:48.000000",
  "disk_format": "ami",
  "updated_at": "2012-12-16T10:37:49.000000",
  "properties": {
    "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",
    "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"
  },
  "min_disk": 0,
  "min_ram": 0,
  "checksum": "10047a119149e08fb206eea89832eee0",
  "owner": "0c74b5d96202433196af2faa9bff4bde",
  "is_public": false,
  "deleted_at": null,
  "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",
  "size": 25165824
}


network_info = [
  {
    "network": {
      "bridge": "br100",
      "subnets": [
        {
          "ips": [
            {
              "meta": {},
              "version": 4,
              "type": "fixed",
              "floating_ips": [],
              "address": "192.168.100.2"
            }
          ],
          "version": 4,
          "meta": {
            "dhcp_server": "192.168.100.1"
          },
          "dns": [
            {
              "meta": {},
              "version": 4,
              "type": "dns",
              "address": "8.8.8.8"
            }
          ],
          "routes": [],
          "cidr": "192.168.100.0/24",
          "gateway": {
            "meta": {},
            "version": 4,
            "type": "gateway",
            "address": "192.168.100.1"
          }
        },
        {
          "ips": [],
          "version": null,
          "meta": {
            "dhcp_server": null
          },
          "dns": [],
          "routes": [],
          "cidr": null,
          "gateway": {
            "meta": {},
            "version": null,
            "type": "gateway",
            "address": null
          }
        }
      ],
      "meta": {
        "tenant_id": null,
        "should_create_bridge": true,
        "bridge_interface": "br100"
      },
      "id": "da8b8d70-6522-495a-b9f7-9bfadb931a8f",
      "label": "private"
    },
    "meta": {},
    "id": "fe9cd80f-c807-4869-9933-cafce241ac0e",
    "address": "fa:16:3e:31:f5:00"
  }
]
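
[ Reading the network model ]

Everything the virt driver needs for the domain definition (bridge, MAC, fixed IPs, gateway) can be pulled out of the model above like this; a sketch against this exact dump, not the NetworkInfo API:

def summarize_vif(vif):
    # One entry of network_info = one VIF attached to one network.
    net = vif["network"]
    fixed_ips = [ip["address"]
                 for subnet in net["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
    gateways = [s["gateway"]["address"]
                for s in net["subnets"]
                if s["gateway"]["address"]]
    return {
        "bridge": net["bridge"],       # br100
        "mac": vif["address"],         # fa:16:3e:31:f5:00
        "fixed_ips": fixed_ips,        # ['192.168.100.2']
        "gateway": gateways[0] if gateways else None,   # 192.168.100.1
    }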


block_device_info = {
  "block_device_mapping": [],
  "root_device_name": null,
  "ephemerals": [],
  "swap": null
}


injected_files = []


nova.compute.manager.py >> ComputeManager >> _allocate_network()

vm_states = BUILDING
task_states = NETWORKING
expected_task_states = None


    nova.network.api.py >> API >> allocate_for_instance()

    nova.network.manager.py >> NetworkManager >> allocate_for_instance()

    nova.network.manager.py >> NetworkManager >> _allocate_mac_address()

    nova.network.manager.py >> RPCAllocateFixedIP >> _allocate_fixed_ips()

    nova.network.manager.py >> NetworkManager >> get_instance_nw_info()
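
[ Sketch: the nova-network allocation chain above ]

In outline: pick the networks for the project, reserve a MAC address, reserve a fixed IP over RPC, then rebuild the cached network model. A simplified stand-in for the chain (the real methods work against the DB and the network RPC API):

import random

def _allocate_mac_address():
    # The fa:16:3e prefix matches the VIF address in the network_info
    # dump shown above; the low three octets are random.
    return "fa:16:3e:%02x:%02x:%02x" % tuple(
        random.randint(0, 255) for _ in range(3))

def allocate_for_instance(instance_uuid, requested_networks, free_ips):
    vifs = []
    for network_id, requested_ip in requested_networks:
        # _allocate_fixed_ips(): honor the requested address if given,
        # otherwise take the next free one from the network's pool.
        address = requested_ip or free_ips[network_id].pop(0)
        vifs.append({"network_id": network_id,
                     "address": _allocate_mac_address(),
                     "fixed_ip": address,
                     "instance_uuid": instance_uuid})
    # get_instance_nw_info() would now rebuild the full network_info
    # model from these rows.
    return vifs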



nova.compute.manager.py >> ComputeManager >> _prep_block_device()

vm_states = BUILDING
task_states = BLOCK_DEVICE_MAPPING
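
[ Sketch: what _prep_block_device() returns ]

_prep_block_device() turns the API's block_device_mapping into the block_device_info dict the virt driver expects; with no volumes attached it comes out essentially empty, exactly as the block_device_info dump above shows. Schematically (the is_swap/is_ephemeral keys are illustrative, not nova's schema):

def prep_block_device(bdms):
    # Sketch only: the real code resolves volumes via the volume API
    # and attaches them before the driver spawns the domain.
    return {
        "root_device_name": None,
        "swap": next((b for b in bdms if b.get("is_swap")), None),
        "ephemerals": [b for b in bdms if b.get("is_ephemeral")],
        "block_device_mapping": [b for b in bdms if b.get("volume_id")],
    }

print(prep_block_device([]))
# {'root_device_name': None, 'swap': None, 'ephemerals': [],
#  'block_device_mapping': []}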


nova.compute.manager.py >> ComputeManager >> _spawn()

[ When VM creation starts ]
vm_states = BUILDING
task_states = SPAWNING
expected_task_states = BLOCK_DEVICE_MAPPING

[ After creation finishes ]
power_state = current_power_state
vm_state = ACTIVE
task_state = None
expected_task_states = SPAWNING
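
[ Why expected_task_states matters ]

Every one of these transitions is a compare-and-swap: the instance update only succeeds if the task_state currently in the DB matches what the caller expected, so two racing requests cannot silently clobber each other. The idea in miniature:

class UnexpectedTaskStateError(Exception):
    pass

def instance_update(db_row, expected_task_state, **values):
    # Guarded update on task_state: if another request already moved
    # the instance on, this update must fail instead of overwriting.
    if db_row["task_state"] != expected_task_state:
        raise UnexpectedTaskStateError(
            "expected %s, found %s"
            % (expected_task_state, db_row["task_state"]))
    db_row.update(values)
    return db_row

row = {"vm_state": "building", "task_state": "block_device_mapping"}
instance_update(row, "block_device_mapping",
                vm_state="building", task_state="spawning")
print(row["task_state"])   # -> spawning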


nova.virt.libvirt.driver.py >> LibvirtDriver >> spawn()
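
[ Sketch: the spawn() sequence ]

spawn() is the last hop: create the image backing files, render the domain XML from instance + network_info + block_device_info, define and start the domain, then poll until the guest reports RUNNING before vm_state flips to ACTIVE (the transition recorded above). A stubbed outline, with each helper standing in for a much larger chunk of the real driver:

def create_image(instance, image_meta):
    # Fetch the glance image and build a COW layer on top of the
    # cached base file (plus kernel/ramdisk for ami-format images).
    print("backing files for %s from image %s"
          % (instance["name"], image_meta["id"]))

def to_xml(instance, network_info):
    # Only the interface element, for illustration; the real XML
    # also covers cpu, memory, disks, console, vnc, and more.
    vif = network_info[0]
    return ("<interface type='bridge'><mac address='%s'/>"
            "<source bridge='%s'/></interface>"
            % (vif["address"], vif["network"]["bridge"]))

def spawn(instance, image_meta, network_info):
    create_image(instance, image_meta)
    xml = to_xml(instance, network_info)
    # The real driver now defines the domain from the XML, plugs the
    # VIFs and firewall rules, starts the domain, and polls the power
    # state until the guest is RUNNING.
    return xml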

