Compute IP          : 172.23.147.187

Virtual IP (NAT)       : 192.168.75.0

Virtual IP (Host-Only) : 192.168.230.0


1. Download the HP Helion OpenStack Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show

# mkdir -p /root/work

# tar -xzvf HP_Helion_OpenStack_1.1.1.tgz -C /root/work



2. Installation documentation

http://docs.hpcloud.com/helion/community/install-virtual/


3. sudo setup

$ sudo visudo

stack   ALL=(ALL:ALL) NOPASSWD: ALL


4. Log in as root and generate an RSA key

$ sudo su -

ssh-keygen -t rsa
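
The install scripts ssh into the local host as root, so the new public key also needs to be authorized (an assumed extra step, following the usual ssh-keygen workflow):

# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# ssh -o StrictHostKeyChecking=no root@localhost true     # accept the host key once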


# Install required software

# apt-get update

# apt-get dist-upgrade

# sudo su -l -c "apt-get install -y qemu-kvm libvirt-bin openvswitch-switch openvswitch-common python-libvirt qemu-system-x86 ntpdate ntp openssh-server"


5. Configure the NTP server

# ntpdate -u time.bora.net

# vi /etc/ntp.conf

...

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server time.bora.net

...

restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap



# Use Ubuntu's ntp server as a fallback.

#server ntp.ubuntu.com

server 127.127.1.0

...


sudo /etc/init.d/ntp restart

# ntpq -p                             # check NTP status

# dpkg-reconfigure ntp                # when NTP errors occur




6. Unpacking

# mkdir work

# cd work

tar zxvf /{full path to downloaded file from step 2}/Helion_Openstack_Community_V1.4.tar.gz



7. Adjust the VM specifications

vi /root/vm_plan.csv

,,,,2,4096,512,Undercloud

,,,,2,24576,512,OvercloudControl

,,,,2,8192,512,OvercloudSwiftStorage

,,,,4,16384,512,OvercloudCompute



8. Start the seed VM

export SEED_NTP_SERVER=192.168.122.1

export NODE_MEM=4096

HP_VM_MODE=y bash -x /root/work/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed --vm-plan /root/vm_plan.csv 2>&1|tee seedvminstall.log
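
A quick sanity check once the script completes (assuming libvirt is managing the VMs, as the qemu-kvm/libvirt-bin install above suggests):

# virsh list --all                    # the seed VM should show as running
# ping -c 3 192.0.2.1                 # the seed VM answers on 192.0.2.1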



9. Create the undercloud and overcloud

# connect to the seed VM

ssh 192.0.2.1


# set variables

# export OVERCLOUD_CONTROLSCALE=1

export OVERCLOUD_SWIFTSTORAGESCALE=1

export OVERCLOUD_SWIFT_REPLICA_COUNT=1

export ENABLE_CENTRALIZED_LOGGING=0

export USE_TRICKLE=0

export OVERCLOUD_STACK_TIMEOUT=240

export UNDERCLOUD_STACK_TIMEOUT=240

export OVERCLOUD_NTP_SERVER=192.168.122.1

export UNDERCLOUD_NTP_SERVER=192.168.122.1

export FLOATING_START=192.0.8.140

export FLOATING_END=192.0.8.240

export FLOATING_CIDR=192.0.8.0/21

export OVERCLOUD_NEUTRON_DVR=False



# change the locale

export LANGUAGE=en_US.UTF-8

export LANG=en_US.UTF-8

export LC_ALL=en_US.UTF-8



# start the undercloud / overcloud install

bash -x tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1|tee stackinstall.log



10. Check the IPs below

OVERCLOUD_IP_ADDRESS  : 192.0.2.23

UNDERCLOUD_IP_ADDRESS  : 192.0.2.2



11. Verify the installation

# check the passwords of the demo and admin users

cat /root/tripleo/tripleo-undercloud-passwords

cat /root/tripleo/tripleo-overcloud-passwords


12. View the undercloud IP after connecting to the seed VM

# . /root/stackrc

UNDERCLOUD_IP=$(nova list | grep "undercloud" | awk ' { print $12 } ' | sed s/ctlplane=// )

echo $UNDERCLOUD_IP
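
With the IP in hand you can log in to the undercloud node (heat-admin is the login user used for the undercloud/overcloud nodes later in these notes, e.g. in the Kibana step):

ssh heat-admin@$UNDERCLOUD_IP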


13. View the overcloud IP from the seed VM

. /root/tripleo/tripleo-overcloud-passwords

TE_DATAFILE=/root/tripleo/ce_env.json

. /root/tripleo/tripleo-incubator/undercloudrc

OVERCLOUD_IP=$(heat output-show overcloud KeystoneURL | cut -d: -f2 | sed s,/,,g )

# echo $OVERCLOUD_IP



[ Changes so that VMs in the overcloud can reach the Internet ]

0. DNS change (overcloud)

/etc/resolv.conf


1. security rule check (overcloud)



2. ip forward (host, seed, undercloud, overcloud)
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.ip_forward = 1
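
These values go into /etc/sysctl.conf on each of those nodes (the same settings appear in the manual-install notes further down this page); apply them without a reboot:

# sysctl -p
# sysctl -w net.ipv4.ip_forward=1        # or set a single value directly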


3. br-tun, br-int, br-ex up (host, seed, overcloud, compute)
ip link set br-tun up
ip link set br-ex up
ip link set br-int up


4. Host iptables NAT add
iptables -t nat -A POSTROUTING -s 192.0.8.0/21 ! -d 192.0.2.0/24 -j SNAT --to-source 172.23.147.187


5. Host iptables filter delete
iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
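
To confirm the two REJECT rules are gone and see what remains on the chain:

iptables -L FORWARD -v -n --line-numbers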



6. Host iptables NAT DNAT port change

# overcloud Horizon port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.0.2.21


# ALS port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.0.8.143
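
The NAT rules added above are not persistent across reboots; review them (and save them with iptables-save if needed) with:

iptables -t nat -L -n --line-numbers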




14. Open console access from the host

# ssh 192.0.2.1 -R 443:<overcloud IP>:443 -L <laptop IP>:443:127.0.0.1:443

# ssh 192.0.2.1 -R 443:192.0.2.24:443 -L 172.23.147.187:443:127.0.0.1:443



15. Connecting to the demo VM

# ssh debian@192.0.8.141



16. Change the overcloud scheduler memory ratio

# ssh heat-admin@192.0.2.23                  # overcloud-controllerMgmt

$ sudo su -

# vi /etc/nova/nova.conf

...

ram_allocation_ratio=100

...

# restart nova-scheduler (e.g. service nova-scheduler restart)

# exit


# apply the same change on the other overcloud controllers

# ssh heat-admin@192.0.2.27             # overcloud-controller0

# ssh heat-admin@192.0.2.28             # overcloud-controller1





17. Access monitoring

http://<under cloud ip>/icinga           # icingaadmin / icingaadmin



18. Find the Kibana password to access undercloud logging

ssh heat-admin@<undercloud IP>

cat  /opt/kibana/htpasswd.cfg

http://<under cloud ip>:81                   # kibana / ?????




# Back up the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --save-vms


# Restore the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --resume-vms





[ HDP install ]


1. Download the HP Helion Development Platform (HDP) Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show


2. HDP installation documentation

https://docs.hpcloud.com/helion/devplatform/install/community


* Can be installed either from the host (base) or from the seed


3. Install required software on the seed

# pip install cffi enum34 pyasn1 virtualenv

# scp -o StrictHostKeyChecking=no 192.0.2.21:/usr/local/share/ca-certificates/ephemeralca-cacert.crt /root/ephemeralca-cacert.crt


tar -zxvf hp_helion_devplatform_community.tar.gz

cd dev-platform-installer

# ./DevelopmentPlatform_Enable.sh \
    -u admin \
    -p bd9352ceed184839e2231d2a13062d461928b857 \   # admin-password
    -a 192.0.2.21 \                                 # overcloud ip
    -i c1821d8687f14fd4b74c11892f5d7af0 \           # tenant-id
    -e /root/ephemeralca-cacert.crt



4. Install required software on the host (base)

# sudo apt-get install -y python-dev libffi-dev libssl-dev python-virtualenv python-pip

# mkdir -p hdp_work

# cd hdp_work

tar -zxvf /home/stack/Downloads/HDP/hp_helion_devplatform_community.tar.gz

cd dev-platform-installer

./DevelopmentPlatform_Setup.sh -p {admin_user_password} -a {auth_keystone_ip_address}

./DevelopmentPlatform_Setup.sh -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -a 192.0.2.21


Or specify -n (username) and -t (tenant name) as below

# ./DevelopmentPlatform_Setup.sh -r regionOne -n admin -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -t admin -a '192.0.2.21'

# if you do not know the admin password, look it up with

# cat /root/tripleo/tripleo-overcloud-passwords


# if you do not know the Keystone IP, look it up with

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json . /root/tripleo/tripleo-incubator/undercloudrc

# heat output-show overcloud KeystoneURL




5. Download the client tools for cluster setup

http://docs.hpcloud.com/helion/devplatform/1.2/ALS-developer-trial-quick-start/2

Download cf-mgmt and the ALS client

# copy the files from the host to the seed

$ unzip *.zip

$ scp helion-1.2.0.1-linux-glibc2.3-x86_64/helion root@192.0.2.1:client

$ scp linux-amd64/cf-mgmt root@192.0.2.1:client


# run on the seed

6. Create Cluster

$ vi ~/.profile

export PATH=$PATH:/root/client/cf-mgmt:/root/client/helion:.


$ cf-mgmt update









===========================   Reference   ======================



5. DNS settings for the VMs

vi tripleo/hp_passthrough/overcloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}


vi tripleo/hp_passthrough/undercloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}



6. Change the VM root disk location

# mkdir -p /data/libvirt/images           # create the directory for the VM qcow2 images in advance

# vi /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh

...

IMAGES_DIR=${IMAGES_DIR:-"/data/libvirt/images"}    # change the directory on line 127

...


# virsh pool-dumpxml default > pool.xml


# vi pool.xml

<pool type='dir'>

  <name>default</name>

  <uuid>9690731d-e0d1-49d1-88a4-b25bccc78418</uuid>

  <capacity unit='bytes'>436400848896</capacity>

  <allocation unit='bytes'>2789785694208</allocation>

  <available unit='bytes'>18446741720324706304</available>

  <source>

  </source>

  <target>

    <path>/data/libvirt/images</path>

    <permissions>

      <mode>0711</mode>

      <owner>-1</owner>

      <group>-1</group>

    </permissions>

  </target>

</pool>


# virsh pool-destroy default

# virsh pool-create pool.xml
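
Verify the default pool now points at the new directory:

# virsh pool-info default
# virsh pool-dumpxml default | grep path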



8. Change the IPs on the listed lines of the following files: 192.0.8.0 -> 192.10.8.0,      192.0.15.0 -> 192.10.15.0

./tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh:800

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:70

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:71

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:72

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:181

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:182

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:183



# variables to set when installing the undercloud and overcloud

# export OVERCLOUD_NEUTRON_DVR=False

# export OVERCLOUD_CINDER_LVMLOOPDEVSIZE=500000      # as large as needed


# change the seed locale

# locale-gen en_US.UTF-8

# sudo dpkg-reconfigure locales          # if needed



# variable setting (when the Community version throws an error)

# vi ./tripleo/tripleo-incubator/scripts/hp_ced_setup_cloud_env.sh

...

export OVERCLOUD_CONTROLSCALE=${OVERCLOUD_CONTROLSCALE:-2}    # change line 40

...


13. Changing the VM DNS when it was not set at install time

. /root/tripleo/tripleo-overcloud-passwords

TE_DATAFILE=/root/tripleo/ce_env.json

. /root/tripleo/tripleo-incubator/undercloudrc

# neutron subnet-list

neutron subnet-update --dns-nameserver 203.236.1.12 --dns-nameserver 203.236.20.11 c4316d44-e2ae-43fb-b462-40fa767bd9fb

















[ devstack prerequisites ]
Installing devstack requires sufficient CPU, memory, and disk.


[ localrc ]

VOLUME_BACKING_FILE_SIZE=70000M


[ Fixing the rabbitmq memory leak ]
$ sudo vi /etc/rabbitmq/rabbitmq-env.conf
#celery_ignore_result = true

$ sudo service rabbitmq-server restart


[ cpu, memory overcommit ]
$ vi /etc/nova/nova.conf

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 100.0
disk_allocation_ratio = 100.0



[ Change the DNS assigned to VMs ]
$ neutron subnet-list
$ neutron subnet-update <subnet> --dns_nameservers list=true 8.8.8.8 8.8.4.4



[ Ubuntu Server 14.04 Image Upload ]
Name : Ubuntu Server 14.04 64-bit
URL : http://uec-images.ubuntu.com/releases/14.04.2/14.04.2/ubuntu-14.04-server-cloudimg-amd64-disk1.img
Format : QCOW2 - QEMU Emulator
Minimum disk : 5
Minimum RAM : 1024

See the image lists at the sites below
https://help.ubuntu.com/community/UEC/Images
http://uec-images.ubuntu.com/releases/



[ Test whether the OpenStack API server is reachable ]

[ Install Ruby ]
$ sudo apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ sudo apt-get install liblzma-dev zlib1g-dev
$ ruby -v
$ nokogiri --version

$ sudo gem install fog
$ vi .fog

:openstack:
  :openstack_auth_url:  http://192.168.230.141:5000/v2.0/tokens
  :openstack_api_key:   <password>
  :openstack_username:  admin
  :openstack_tenant:    demo
  :openstack_region:    RegionOne # Optional

$ fog openstack
>>Compute[:openstack].servers



[ Verify access to the OpenStack metadata server ]
curl http://169.254.169.254

[ Pass user_data with fog ]
$ fog openstack
>> s = Compute[:openstack].servers.create(name: 'test', flavor_ref: , image_ref: , personality: [{'path' => 'user_data.json', 'contents' => 'test' }])



[ Check whether OpenStack API calls are rate limited ]
$ fog openstack
>> 100.times { p Compute[:openstack].servers }


[ Test creating a large volume ]
1. Create a 30G volume
2. Attach the volume to an instance

[ If the volume will not attach, check whether tgtd is running ]
sudo netstat -tulpn | grep 3260
sudo service tgt start

3. Format the additional volume
$ sudo fdisk -l
$ sudo fdisk /dev/vdb

Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048): ENTER
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559): ENTER
Command (m for help): t
Partition number (1-4, default 1): 1
Hex code (type L to list codes): 83
Command (m for help): w

sudo mkfs.ext3 /dev/vdb1
$ sudo mkdir /disk
$ sudo mount -t ext3 /dev/vdb1 /disk
$ cd /disk
$ sudo touch pla





[ OpenStack configuration for installing MicroBOSH ]
$ mkdir ~/my-micro-deployment
$ cd my-micro-deployment

[ Prepare the Nova client ]
$ sudo apt-get install python-novaclient
$ unset OS_SERVICE_TOKEN
$ unset OS_SERVICE_ENDPOINT
$ vi adminrc
export OS_USERNAME=admin
export OS_PASSWORD=imsi00
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://192.168.230.141:35357/v2.0

1. Create a keypair: microbosh
nova keypair-add microbosh >> microbosh.pem
$ chmod 600 microbosh.pem

2. Create a security group: bosh
name : bosh
description : BOSH Security Group

3. Enter the security rules

Direction  Ether Type  IP Protocol  Port Range  Remote
Ingress    IPv4        TCP          1-65535     bosh
Ingress    IPv4        TCP          25777       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25555       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25250       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          6868        0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          4222        0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          68          0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          53          0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          53          0.0.0.0/0 (CIDR)
Egress     IPv4        Any          -           0.0.0.0/0 (CIDR)
Egress     IPv6        Any          -           ::/0 (CIDR)
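
The same group can also be built from the CLI instead of Horizon; a rough sketch with the nova client of that era (only a few of the rules from the table above are shown — add the rest the same way):

$ nova secgroup-create bosh "BOSH Security Group"
$ nova secgroup-add-group-rule bosh bosh tcp 1 65535
$ nova secgroup-add-rule bosh tcp 25555 25555 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 6868 6868 0.0.0.0/0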


4. Allocate Floating IP



[ Install MicroBOSH ]
1. Write the deployment manifest

$ vi manifest.yml

name: microbosh

network:
  type: manual
  vip: 192.168.75.206       # Replace with a floating IP address
  ip: 10.0.0.15    # subnet IP address allocation pool of OpenStack internal network
  cloud_properties:
    net_id: a34928c6-9715-4a91-911e-a6822afd600b # internal network UUID

resources:
  persistent_disk: 20000
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://192.168.230.141:35357/v2.0   # Identity API endpoint
      tenant: demo          # Replace with OpenStack tenant name
      username: admin    # Replace with OpenStack username
      api_key: <password>      # Replace with your OpenStack password
      default_key_name: microbosh   # OpenStack Keypair name
      private_key: microbosh.pem     # Path to OpenStack Keypair private key
      default_security_groups: [bosh]

apply_spec:
  properties:
    director: {max_threads: 3}
    hm: {resurrector_enabled: true}
    ntp: [time.bora.net, 0.north-america.pool.ntp.org, 1.north-america.pool.ntp.org]


2. Install the BOSH CLI
$ sudo gem install bosh_cli --no-ri --no-rdoc
$ sudo gem install bosh_cli_plugin_micro --no-ri --no-rdoc


3. Download stemcell
https://bosh.io/stemcells

[ Download the Ubuntu Server 14.04 stemcell ]
https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

$ curl -k -L -J -O https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

or,

$ wget --no-check-certificate --content-disposition https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

4. MicroBosh Deploy
$ bosh micro deployment manifest.yml
bosh micro deploy bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz
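
After the deploy finishes, a quick way to confirm the director is up (not part of the original notes, just the standard micro CLI checks):

$ bosh micro status
$ bosh target 192.168.75.206        # then log in with admin / admin, as in the Cloud Foundry section below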


5. MicroBosh Undeploy
bosh micro delete


6. MicroBosh Redeploy
bosh micro deploy --update bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz





[ Install Cloud Foundry on the MicroBOSH VM (Warden-based) ]
1. Connect to the VM
    - id : vcap / c1oudc0w
    - sudo su -


2. Install Ruby and the BOSH CLI
$ apt-get update
$ apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ apt-get install liblzma-dev zlib1g-dev

gem install bosh_cli --no-ri --no-rdoc -r
gem install bosh_cli_plugin_micro --no-ri --no-rdoc -r


* How to upgrade rubygems
wget http://production.cf.rubygems.org/rubygems/rubygems-2.4.8.tgz
tar xvfz rubygems-2.4.8.tgz
$ cd rubygems-2.4.8
ruby setup.rb

* Add a remote gem source
gem sources --add http://rubygems.org/


3. Install Go
$ mkdir -p CloudFoundry
$ cd CloudFoundry
wget --no-check-certificate https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz
$ tar -C /usr/local -xzf go1.4.2.linux-amd64.tar.gz
$ mkdir -p /usr/local/gopath

$ vi ~/.profile
export GOPATH=/usr/local/gopath
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

$ . ~/.profile
$ apt-get install git
go get github.com/cloudfoundry-incubator/spiff


4. Download the Cloud Foundry source
git clone https://github.com/cloudfoundry/cf-release.git
cd cf-release
$ ./update


5. Install Cloud Foundry manually
bosh target 192.168.75.206
   admin / admin


* Expand the /tmp directory
# attach an additional disk first
mkfs.ext3 /dev/vdc
$ mkdir -p /tmp2
mount -t ext3 /dev/vdc /tmp2
$ mount --bind /tmp2 /tmp
$ chown root.root /tmp
$ chmod 1777 /tmp

* Unmount
$ umount /tmp


bosh upload release releases/cf-212.yml


cp spec/fixtures/openstack/cf-stub.yml .





[ Install BOSH Lite (Warden-based) on Mac ]
1. install vagrant
http://www.vagrantup.com/downloads.html


2. Download bosh-lite
$ git clone https://github.com/cloudfoundry/bosh-lite
$ cd bosh-lite


3. install VirtualBox
https://www.virtualbox.org/wiki/Downloads


4. start vagrant
vagrant up --provider=virtualbox


5. Target the Bosh Director and login admin/admin
bosh target 192.168.50.4 lite
$ bosh login


6. Install Homebrew and Spiff
xcode-select --install
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew tap xoebus/homebrew-cloudfoundry
brew install spiff


7. Install Cloud Foundry
git clone https://github.com/cloudfoundry/cf-release
$ cd cf-release
$ ./update


8. Single Command Deploy
$ cd ~/CloudFoundry/bosh-lite
$ ./bin/provision_cf


9. Add routes
$ ./bin/add-route


10. Restart the containers after a VM restart
$ bosh cck
Choose "2" to create missing VM:
"yes"




[ simple go webapp Deploy ]

$ ssh vcap@192.168.50.4     # password : c1oudc0w

$ bosh vms

$ cf api --skip-ssl-validation https://api.192.168.0.34.xip.io           # ha-proxy ip


$ cf login

Email> admin

Password> admin


$ cf create-org test-org

$ cf target -o test-org

$ cf create-space development

$ cf target -o test-org -s development


$ sudo apt-get update

$ sudo apt-get install git

$ sudo apt-get install golang

cf update-buildpack go_buildpack

git clone https://github.com/cloudfoundry-community/simple-go-web-app.git

$ cd simple-go-web-app

### another buildpack: https://github.com/cloudfoundry/go-buildpack.git

$ cf push simple-go-web -b https://github.com/kr/heroku-buildpack-go.git

$ cf apps             # app list

$ cf logs simple-go-web --recent     # app deploy log


# cf login

$ cf login -a http://api.10.244.0.34.xip.io -u admin -o test-org -s development
















PyCharm configuration


On Mac


1. Install virtualenv

$ sudo pip install virtualenv


2. Install the ffi.h header

$ xcode-select --install


3. Create a virtualenv

$ virtualenv .venv


4. Activate the virtualenv

$ . ./.venv/bin/activate


5. Install the requirements

(.venv)$ pip install -r requirements.txt


6. Export the environment

(.venv)$ pip freeze > requirements.txt














OpenStack Manual build


0. Environment setup

$ sudo easy_install --upgrade transifex-client

sudo apt-get install gnome-doc-utils


- Save the transifex configuration file

$ vi ~/.transifexrc

[https://www.transifex.com]
hostname = https://www.transifex.com
password = <openstack password>
token =
username = skanddh@gmail.com

[https://www.transifex.net]
hostname = https://www.transifex.com
password = <openstack password>
token =
username = skanddh@gmail.com


1. Download with git


2. Download all the latest translations from Transifex

$ tx pull -f -l ko_KR

 

# to download only the install-guide

$ tx pull -f -l ko_KR -r openstack-manuals-i18n.install-guide


2-1. Install Korean fonts

$ git clone https://github.com/stackforge/clouddocs-maven-plugin

cd clouddocs-maven-plugin/src/main/resources/fonts/

$ mkdir -p nanum-font

wget http://cdn.naver.com/naver/NanumFont/fontfiles/NanumFont_TTF_ALL.zip

unzip NanumFont_TTF_ALL.zip -d ~/Git/clouddocs-maven-plugin/src/main/resources/fonts/nanum-font/

$ cd ~/Git/clouddocs-maven-plugin/src/main/resources/cloud/fo

$ vi docbook.xsl


  <xsl:param name="bodyFont">

    <xsl:choose>

      <xsl:when test="starts-with(/*/@xml:lang, 'zh')">AR-PL-New-Sung</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ja')">TakaoGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko')">NanumGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko_KR')">NanumGothic</xsl:when>

      <xsl:otherwise>CartoGothic Std</xsl:otherwise>

    </xsl:choose>

  </xsl:param>


  <xsl:param name="monospace.font.family">

    <xsl:choose>

      <xsl:when test="$monospaceFont != ''"><xsl:value-of select="$monospaceFont"/></xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'zh')">AR-PL-New-Sung</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ja')">TakaoGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko')">NanumGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko_KR')">NanumGothic</xsl:when>

      <xsl:otherwise>monospace</xsl:otherwise>

    </xsl:choose>

  </xsl:param>


$ cd clouddocs-maven-plugin

$ vi pom.xml

...

<version>2.1.5-SNAPSHOT</version>

...

$ mvn clean install


3. Build the install-guide in Korean

$ vi .tx/config


...

[openstack-manuals-i18n.common]

file_filter = doc/common/locale/<lang>.po

minimum_perc = 8

source_file = doc/common/locale/common.pot

source_lang = ko_KR

type = PO


[openstack-manuals-i18n.install-guide]
file_filter = doc/install-guide/locale/<lang>.po
minimum_perc = 75
source_file = doc/install-guide/locale/install-guide.pot
source_lang = ko_KR                # changed from en to ko_KR
type = PO

...


$ cd doc/install-guide

$ mvn clean generate-sources

 

 

4. If step 3 fails, build the Korean manuals with tox


- Read the XML files and create a PO template (POT) file

./tools/generatepot doc/install-guide/


- Create per-language PO files from the POT template (upload to Transifex for translation)


- Select the folders to generate

$ sudo pip install tox

$ tox -e py27

$ source .tox/py27/bin/activate

 

(py27) $ vi doc-tools-check-languages.conf


# common is required at build time

declare -A DIRECTORIES=(
["fr"]="common glossary user-guide image-guide"
["ja"]="common glossary image-guide install-guide user-guide user-guide-admin"
["pt_BR"]="common install-guide"
["zh_CN"]="common glossary arch-design image-guide"
["ko_KR"]="common install-guide"
)

# books to be built
declare -A BOOKS=(
["fr"]="user-guide image-guide"
["ja"]="image-guide install-guide user-guide user-guide-admin"
["pt_BR"]="install-guide"
["zh_CN"]="arch-design image-guide"
["ko_KR"]="install-guide"
)


# the common module must have a ko_KR.po file

(py27) $ cd doc/common/locale

(py27) $ cp ja.po ko_KR.po


- Use tox to merge the translated messages and generate the DocBook (creates the generated folder)

(py27) $ tox -e checkniceness - to run the niceness tests (for example, to see extra whitespaces)

(py27) $ tox -e checksyntax - to run syntax checks
(py27) $ tox -e checkdeletions - to check that no deleted files are referenced
(py27) $ tox -e checkbuild - to actually build the manual(s). This will also generate a directory publish-docs that contains the built files for inspection. You can also use doc/local-files.html for looking at the manuals.
(py27) $ tox -e checklang - to check all translated manuals

 

or


$ tox -e buildlang -- ko_KR


- Build the PDF files from the generated folder

(py27) $ cd generated/ko_KR

(py27) $ vi pom.xml

 

  <modules>
    <!--module>admin-guide-cloud</module>
    <module>arch-design</module>
    <module>cli-reference</module>
    <module>config-reference</module>
    <module>glossary</module>
    <module>hot-reference</module>
    <module>image-guide</module-->
    <module>install-guide</module>
    <!--module>user-guide</module>
    <module>user-guide-admin</module-->
  </modules>

...

  <build>

    <plugins>

      <plugin>

        <groupId>com.rackspace.cloud.api</groupId>

        <artifactId>clouddocs-maven-plugin</artifactId>

        <!--version>2.1.3</version-->

        <version>2.1.5-SNAPSHOT</version>

      </plugin>

    </plugins>

  </build>


(py27) $ cd install-guide

(py27) $ mvn clean generate-sources




[ When building from RST files ]


1. Install Sphinx

$ sudo pip install Sphinx


2. Create a POT file from each RST file (slicing)

sphinx-build -b gettext doc/playground-user-guide/source/ doc/playground-user-guide/source/locale/


3. Merge the individual POT files into a single POT file

msgcat doc/playground-user-guide/source/locale/*.pot > doc/playground-user-guide/source/locale/playground-user-guide.pot


4. Upload the POT file to Transifex and create per-language PO files (translation work)

   - Jenkins does this automatically when a commit lands

$ tx set --auto-local -r openstack-i18n.playground-user-guide "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po"  --source-lang en --source-file doc/playground-user-guide/source/locale/playground-user-guide.pot -t PO --execute

$ tx push -s


5. Download

$ tx set --auto-local -r openstack-i18n.playground-user-guide "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po"  --source-lang en --source-file doc/playground-user-guide/source/locale/playground-user-guide.pot -t PO --execute

$ tx pull -l ko_KR


6. build HTML


- Split the merged PO file into the individual PO files

msgmerge -o doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.po doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po doc/playground-user-guide/source/locale/A.pot


- Convert each small PO file into an MO file

msgfmt "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.po" -o "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.mo"


- build HTML

$ sphinx-build -D language=ko_KR doc/playground-user-guide/source/ doc/playground-user-guide/build/html



* Extract a POT file from the source code

$ pybabel extract -o nova/locale/nova.pot nova/




[ Start the Neutron server ]

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


[ nova boot command ]

nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


[ Check the vxlan connection ]

sudo ovs-ofctl show br-tun


[ neutron.conf setting on the controller node ]

nova_admin_tenant_id = service    # must be the tenant ID, not the name



[ Order of operations to delete a network ]

1. Remove the router interface for the subnet

neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

neutron subnet-delete [subnet-id]


3. Delete the network

neutron net-delete [net-id]



[ Create a network with the vxlan type ]

neutron net-create demo-net --provider:network_type vxlan



[ Register security rules ]

neutron security-group-rule-create --protocol icmp --direction ingress default

neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default



[ Add a default gateway route ]

route add -net "0.0.0.0/0" gw "10.0.0.1"



[ MTU settings ]

1. Set it in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set it dynamically (reboot required)

ifconfig eth2 mtu 9000

reboot



[ Add a floating IP ]

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]



[ Build a distribution that can be installed with pip ]

$ sudo python setup.py sdist --formats=gztar



[ Metadata service ]

1. metadata-agent and neutron-ns-metadata-proxy only need to run on the network node.

   The compute node does not need them.

   VMs on the compute nodes reach the metadata service through the qdhcp namespace on the network node, which they see as their gateway.


2. Edit /etc/nova/nova.conf on the controller node

[neutron]

service_metadata_proxy=True


3. Edit /etc/neutron/metadata_agent.ini on the network and compute nodes

auth_region = regionOne   # using RegionOne here causes an error


[ Inside the cirros VM ]

$ wget http://169.254.169.254/latest/meta-data/instance-id




cat /etc/nova/nova.conf  | grep -v ^# | grep -v ^$ | grep metadata

cat /etc/neutron/metadata_agent.ini | grep -v ^# | grep -v ^$ | grep metadata

cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$



[ Calling the metadata service directly from the controller node ]

curl \

  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \

  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \

  -H 'x-instance-id-signature: 80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \

  http://localhost:8775/latest/meta-data


[ Computing x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> hmac.new('opensack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()   # key: the metadata_proxy_shared_secret, message: the instance id

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>


[ neutron server init script ]

1. Delete the file previously copied to /etc/init.d/neutron-server


2. sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script

  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server

  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \

    --config-file /etc/neutron/neutron.conf \

    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \

    --log-file /var/log/neutron/neutron-server.log $CONF_ARG

end script










Python notes


[ Software Architecture ]


1. Rest API & Queue

2. Source Directory Structure


[ package ]


1. SQLAlchemy : The Python SQL Toolkit and Object Relational Mapper



[ Programming ]

1. Dynamically importing modules


# import os dynamically (requires: import sys)

import sys

def import_module(import_str):

    __import__(import_str)          # raises ImportError if the import fails

    return sys.modules[import_str]  # raises KeyError if the module is not in sys.modules


os = import_module("os")

os.getcwd()


# import versioned_submodule

module = 'mymodule.%s' % version

module = '.'.join((module, submodule))

import_module(module)


# import class

import_value = "nova.db.sqlalchemy.models.NovaBase"

mod_str, _sep, class_str = import_value.rpartition('.')

import_module(mod_str)                                     # the module must be imported before the sys.modules lookup

novabase_class = getattr(sys.modules[mod_str], class_str)

novabase_class()




2. Data Access Object (DAO)


nova.db.base.Base

    def __init__(...)

        self.db = import_module(db_driver)



# usage example

Manager(base.Base)

...

self.db.instance_update(...)


self.db is set on Base

db_driver is the "nova.db" package

the __init__ of the nova.db package does: from nova.db.api import *

therefore self.db ends up being nova.db.api


nova.db.api

IMPL = nova.db.sqlalchemy.api

def instance_update(...)

   IMPL.instance_update(...)




3. Using configuration


Using configuration in nova.db.sqlalchemy.api

CONF = cfg.CONF

used as, for example, CONF.compute_topic



the oslo.config.cfg package

CONF = ConfigOpts()



Opt

name = the option name

type = StrOpt, IntOpt, FloatOpt, BoolOpt,  List,  DictOpt, IPOpt, MultiOpt, MultiStrOpt

dest : the name of the corresponding ConfigOpts property

default : the default value


ConfigOpts(collections.Mapping)

    def __init__(self):

        self._opts = {}     # dict of dicts of (opt:,  override:,  default: )


    def __getattr__(self, name):           # called when the attribute does not actually exist

        return self._get(name)



4. The decorator pattern


def require_admin_context(f):


    def wrapper(*args, **kwargs):

        nova.context.require_admin_context(args[0])

        return f(*args, **kwargs)

    return wrapper


@require_admin_context

def service_get_by_compute_host(context, host):

    ...


* To define a decorator as a class, define the __call__() member function so the class instance can be called like a function


This is slightly different from the GoF Decorator pattern.

GoF Decorator: adds decoration through inheritance (object-oriented).

It is actually closer to the GoF Template Method pattern (though that, too, differs in its object-oriented form).

Similar to AspectJ or Spring AOP (Aspect Oriented Programming), but the Python usage is simpler.



[ Network layout ]

eth0  : NAT (Public Network)

eth1  : Host-only (Private Management Network)

eth2  : Host-only (Private Data Network)


controller   : eth0 - 192.168.75.151    eth1 - 192.168.230.151

network     : eth0 - 192.168.75.152    eth1 - 192.168.230.152     eth2 - 192.168.200.152

Compute   : eth0 - 192.168.75.153    eth1 - 192.168.230.153     eth2 - 192.168.200.153


0. Kernel version

The kernel must be upgraded from 3.13.0-24-generic to 3.13.0-34-generic


1. Change the hostname

$ sudo vi /etc/hostname

...

controller

...

$ sudo hostname -F /etc/hostname


$ sudo vi /etc/hosts

...

192.168.230.151 controller

192.168.230.152 network

192.168.230.153 compute


2. NTP and local time settings

$ sudo apt-get install ntp

$ sudo vi /etc/ntp.conf

...

server time.bora.net

...

$ sudo ntpdate -u time.bora.net

$ sudo ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

$ sudo service ntp restart


3. Create a user and set up sudo

# adduser stack

# visudo

...

stack   ALL=(ALL:ALL) NOPASSWD: ALL           # add on the very last line


4. IP forwarding and rp_filter (anti-spoofing) settings

sudo vi /etc/sysctl.conf

...

net.ipv4.conf.default.rp_filter=0         # disable reverse-path filtering (1 would drop spoofed/asymmetric traffic)

net.ipv4.conf.all.rp_filter=0             # disable reverse-path filtering

net.ipv4.ip_forward=1

...

$ sudo sysctl -p


5. Install common packages

- Python pip

- Python development libraries

- Python eventlet development library

- Python MySQL library

- vlan and bridge utilities

- lvm                               (for Cinder)

- Open vSwitch

- Python libvirt library            (to control KVM)

- nbd kernel module                 (to mount VM disks)

- ipset                             (used when enable_ipset=True in ml2, to improve OVS performance)


sudo apt-get install python-pip

sudo apt-get install python-dev

$ sudo apt-get install libevent-dev

$ sudo apt-get install python-mysqldb

sudo apt-get install vlan bridge-utils

sudo apt-get install lvm2

$ sudo apt-get install openvswitch-switch

$ sudo apt-get install python-libvirt

$ sudo apt-get install nbd-client

$ sudo apt-get install ipset


$ sudo apt-get install python-tox            # tox: tool used to generate nova.conf

$ sudo apt-get install libmysqlclient-dev    # mysql config needed when generating with tox

$ sudo apt-get install libpq-dev             # pq config needed when generating with tox

$ sudo apt-get install libxml2-dev           # xml parsing needed when generating with tox

$ sudo apt-get install libxslt1-dev          # xml parsing needed when generating with tox

$ sudo apt-get install libvirt-dev           # needed when generating with tox

$ sudo apt-get install libffi-dev            # needed when generating with tox



[ Processes and packages per node ]


1. Processes running on the controller node

nova-api

nova-scheduler

nova-conductor

nova-consoleauth

nova-console

nova-novncproxy

nova-cert


neutron-server


2. Processes running on the network node

   metadata service: metadata-agent and neutron-ns-metadata-proxy only need to run on the network node

neutron-l3-agent

neutron-dhcp-agent

neutron-openvswitch-agent

neutron-metadata-agent          # needed on the network node for the metadata service

neutron-ns-metadata-proxy       # VMs reach it via the network node's qdhcp namespace, which they see as their gateway


3. Processes running on the compute node

nova-compute


neutron-l3-agent

neutron-openvswitch-agent



1. Neutron packages to install on the controller node

neutron-server

neutron-plugin-ml2


2. Neutron packages to install on the network node

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)

neutron-dhcp-agent


3. Neutron packages to install on the compute node

neutron-common

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)



###############   controller   ######################


[ Install RabbitMQ ]

$ sudo  apt-get install rabbitmq-server

sudo rabbitmqctl change_password guest rabbit


[ Install MySQL ]

$ sudo apt-get install mysql-server python-mysqldb

$ sudo vi /etc/mysql/my.cnf

...

bind-address        = 0.0.0.0

...

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

character_set_filesystem = utf8

...

$ sudo service mysql restart


[ Install Keystone ]


1. Install the Keystone package

$ mkdir -p Git

$ cd Git

$ git clone http://git.openstack.org/openstack/keystone.git

$ cd keystone

$ git checkout -b 2014.2.1 tags/2014.2.1


$ sudo pip install pbr==0.9                # install pbr separately; its version pinning is problematic

$ sudo pip install -e .                    # install the source with pip


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';


3. Create the conf and log directories

$ sudo mkdir -p /var/log/keystone

$ sudo chown -R stack.stack /var/log/keystone

sudo mkdir -p /etc/keystone

$ sudo cp ~/Git/keystone/etc/* /etc/keystone/.


$ sudo vi /etc/logrotate.d/openstack

/var/log/keystone/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/nova/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/cinder/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/glance/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/neutron/*.log {

    daily

    rotate 31

    missingok

    dateext

}


4. Copy the conf files

sudo chown -R stack.stack /etc/keystone

$ cd /etc/keystone

mv keystone.conf.sample keystone.conf

$ mv logging.conf.sample logging.conf

$ mkdir -p ssl

$ cp -R ~/Git/keystone/examples/pki/certs /etc/keystone/ssl/.

$ cp -R ~/Git/keystone/examples/pki/private /etc/keystone/ssl/.


5. Configure keystone.conf

$ sudo vi keystone.conf


[DEFAULT]

admin_token=ADMIN

admin_workers=2

max_token_size=16384

debug=True

logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s

rabbit_host=controller

rabbit_password=rabbit

log_file=keystone.log

log_dir=/var/log/keystone


[catalog]

 driver=keystone.catalog.backends.sql.Catalog


[database]

connection=mysql://keystone:keystone_dbpass@controller/keystone


[identity]

driver=keystone.identity.backends.sql.Identity


[paste_deploy]

config_file=/etc/keystone/keystone-paste.ini


[token]

expiration=7200

driver=keystone.token.persistence.backends.sql.Token


6. Create the Keystone tables

$ keystone-manage db_sync


7. Register the init script

$ sudo vi /etc/init/keystone.conf


description "Keystone server"

author "somebody"


start on (filesystem and net-device-up IFACE!=lo)

stop on runlevel [016]


chdir /var/run


exec su -c "keystone-all" stack


$ sudo service keystone start


8. Create initrc for the initial Keystone commands

$ vi initrc


export OS_SERVICE_TOKEN=ADMIN

export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0


9. Register tenants, users, and roles

$ . initrc

$ keystone tenant-create --name=admin --description="Admin Tenant"

$ keystone tenant-create --name=service --description="Service Tenant"

$ keystone user-create --name=admin --pass=ADMIN --email=admin@example.com

$ keystone role-create --name=admin

$ keystone user-role-add --user=admin --tenant=admin --role=admin


10. Register the service

$ keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"


11. Register the endpoint

$ keystone endpoint-create --service=keystone --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0


12. Create adminrc

$ unset OS_SERVICE_TOKEN

$ unset OS_SERVICE_ENDPOINT

$ vi adminrc

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN

export OS_TENANT_NAME=admin

export OS_AUTH_URL=http://controller:35357/v2.0


13. Keystone conf file listing

stack@controller:/etc/keystone$ ll

total 104

drwxr-xr-x   3 stack stack  4096 Jan  7 15:53 ./

drwxr-xr-x 137 root  root  12288 Jan  7 17:23 ../

-rw-r--r--   1 stack stack  1504 Jan  7 11:16 default_catalog.templates

-rw-r--r--   1 stack stack 47749 Jan  7 11:51 keystone.conf

-rw-r--r--   1 stack stack  4112 Jan  7 11:16 keystone-paste.ini

-rw-r--r--   1 stack stack  1046 Jan  7 11:16 logging.conf

-rw-r--r--   1 stack stack  8051 Jan  7 11:16 policy.json

-rw-r--r--   1 stack stack 10676 Jan  7 11:16 policy.v3cloudsample.json

drwxrwxr-x   4 stack stack  4096 Jan  7 11:55 ssl/

stack@controller:/etc/keystone$ cd ssl

stack@controller:/etc/keystone/ssl$ ll

total 16

drwxrwxr-x 4 stack stack 4096 Jan  7 11:55 ./

drwxr-xr-x 3 stack stack 4096 Jan  7 15:53 ../

drwxrwxr-x 2 stack stack 4096 Jan  7 11:54 certs/

drwxrwxr-x 2 stack stack 4096 Jan  7 11:55 private/



[ Install Glance ]


1. Install the Glance package

$ git clone http://git.openstack.org/openstack/glance.git

$ cd glance

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE glance;

mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';


3. Register the service

$ keystone user-create --name=glance --pass=glance_pass --email=glance@example.com

$ keystone user-role-add --user=glance --tenant=service --role=admin

$ keystone service-create --name=glance --type=image --description="Glance Image Service"

$ keystone endpoint-create --service=glance --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292


4. Create the conf and log directories

$ sudo mkdir -p /var/log/glance

$ sudo chown -R stack.stack /var/log/glance

sudo mkdir -p /etc/glance

$ sudo cp ~/Git/glance/etc/glance-* /etc/glance/.

$ sudo cp ~/Git/glance/etc/*.json /etc/glance/.

$ sudo cp ~/Git/glance/etc/logging.cnf.sample /etc/glance/logging.cnf


$ sudo mkdir -p /var/lib/glance

sudo chown stack.stack /var/lib/glance

$ mkdir -p /var/lib/glance/images

$ mkdir -p /var/lib/glance/image-cache


5. Change the conf ownership

$ sudo chown -R stack.stack /etc/glance


6. Configure glance-api.conf

$ vi /etc/glance/glance-api.conf


[DEFAULT]

verbose = True

debug = True

rabbit_host = controller

rabbit_password = rabbit

image_cache_dir = /var/lib/glance/image-cache/

delayed_delete = False

scrub_time = 43200

scrubber_datadir = /var/lib/glance/scrubber


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


[glance_store]

filesystem_store_datadir = /var/lib/glance/images/


7. Configure glance-registry.conf

$ vi /etc/glance/glance-registry.conf


[DEFAULT]

verbose = False

debug = False

rabbit_host = controller

rabbit_password = rabbit


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


8. Create the Glance tables

$ glance-manage db_sync


9. Register the init scripts

$ sudo vi /etc/init/glance-api.conf


description "Glance API server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-api" stack


$ sudo service glance-api start


$ sudo vi /etc/init/glance-registry.conf


description "Glance registry server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-registry" stack


10. Install the Glance client package

$ git clone http://git.openstack.org/openstack/python-glanceclient.git

$ cd python-glanceclient

$ git checkout -b 0.15.0 tags/0.15.0

$ sudo pip install -e .


11. Register images

$ wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

$ glance image-create --name cirros-0.3.3 --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.3-x86_64-disk.img


# Register the Heat image (from devstack/files)

$ glance image-create --name [Heat]F17-x86_64-cfntools --is-public true --container-format bare --disk-format qcow2 --file F17-x86_64-cfntools.qcow2


# Register the Fedora image (from devstack/files)

$ glance image-create --name Fedora-x86_64-20-20140618-sda --is-public true --container-format bare --disk-format qcow2 --file Fedora-x86_64-20-20140618-sda.qcow2


# Register the mysql image (from devstack/files)

$ glance image-create --name mysql --is-public true --container-format bare --disk-format qcow2 --file mysql.qcow2
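
A quick check that the uploaded images became active:

$ glance image-list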



[ Install Cinder ]


1. Install the Cinder package

$ git clone http://git.openstack.org/openstack/cinder.git

$ cd cinder

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';


3. Register the service

$ keystone user-create --name=cinder --pass=cinder_pass --email=cinder@example.com

$ keystone user-role-add --user=cinder --tenant=service --role=admin

$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

$ keystone endpoint-create --service=cinder --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

$ keystone endpoint-create --service=cinderv2 --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s


4. Create the conf and log directories

$ sudo mkdir -p /var/log/cinder

$ sudo chown -R stack.stack /var/log/cinder

sudo mkdir -p /etc/cinder

$ sudo cp -R ~/Git/cinder/etc/cinder/* /etc/cinder/.


5. Change the conf ownership

$ sudo chown -R stack.stack /etc/cinder

$ mv /etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf

$ sudo chown root.root /etc/cinder/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/cinder/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/cinder

sudo chown stack.stack /var/lib/cinder

$ mkdir -p /var/lib/cinder/volumes

$ sudo mkdir -p /var/lock/cinder

sudo chown stack.stack /var/lock/cinder

sudo mkdir -p /var/run/cinder

sudo chown stack.stack /var/run/cinder


6. Configure cinder.conf

$ vi /etc/cinder/cinder.conf


[DEFAULT]

rpc_backend=cinder.openstack.common.rpc.impl_kombu

rabbit_host=controller

rabbit_password=rabbit

api_paste_config=api-paste.ini

state_path=/var/lib/cinder

glance_host=controller

lock_path=/var/lock/cinder

debug=True

verbose=True

rootwrap_config=/etc/cinder/rootwrap.conf

auth_strategy=keystone

volume_name_template=volume-%s

iscsi_helper=tgtadm

volumes_dir=$state_path/volumes

# volume_group=cinder-volumes               # removed; it is set in the volume-type backend section below


enabled_backends=lvm-iscsi-driver

default_volume_type=lvm-iscsi-type


[lvm-iscsi-driver]

volume_group=cinder-volumes

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

san_ip=controller

volume_backend_name=lvm-iscsi


[database]

connection = mysql://cinder:cinder_dbpass@controller/cinder


[keystone_authtoken]

auth_host=controller

auth_port=35357

auth_protocol=http

auth_uri=http://controller:5000

admin_user=cinder

admin_password=cinder_pass

admin_tenant_name=service


7. Create the Cinder tables

$ cinder-manage db sync


8. Create the backing volume group

$ mkdir -p ~/cinder-volumes

$ cd cinder-volumes

$ dd if=/dev/zero of=cinder-volumes-backing-file bs=1 count=0 seek=5G

$ sudo losetup /dev/loop1 /home/stack/cinder-volumes/cinder-volumes-backing-file

$ sudo fdisk /dev/loop1

n p 1 Enter Enter t 8e w         # new primary partition 1, defaults, type 8e (Linux LVM), write

$ sudo pvcreate /dev/loop1

$ sudo vgcreate cinder-volumes /dev/loop1
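
Quick verification that LVM sees the new PV/VG (note the losetup above is not persistent, so it has to be re-run after a host reboot):

$ sudo pvs
$ sudo vgs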


9. Register the init scripts

$ sudo vi /etc/init/cinder-api.conf


description "Cinder api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-api.log" stack


$ sudo service cinder-api start


$ sudo vi /etc/init/cinder-scheduler.conf


description "Cinder scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-scheduler.log" stack


$ sudo service cinder-scheduler start


$ sudo vi /etc/init/cinder-volume.conf


description "Cinder volume server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log" stack


$ sudo service cinder-volume start


10. Register the volume type

$ cinder type-create lvm-iscsi-type

$ cinder type-key lvm-iscsi-type set volume_backend_name=lvm-iscsi


11. Create a volume

$ cinder create --display-name test01 --volume-type lvm-iscsi-type 1
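
Check that the volume reaches the available state and that the LV was actually created in the backing VG:

$ cinder list
$ sudo lvs cinder-volumes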




[ Install Nova on the controller ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


git clone https://github.com/kanaka/novnc.git

sudo cp -R novnc /usr/share/novnc


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';


3. Register the service

$ keystone user-create --name=nova --pass=nova_pass --email=nova@example.com

$ keystone user-role-add --user=nova --tenant=service --role=admin

$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"

$ keystone endpoint-create --service=nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s


4. Generate the conf file

$ cd ~/Git/nova

$ sudo tox -i http://xxx.xxx.xxx.xxx/pypi/web/simple -egenconfig          # IP of the pypi mirror

$ sudo chown stack.stack /home/stack/Git/nova/etc/nova/nova.conf.sample


5. Create the conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


6. Change the conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


7. Configure nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.151

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

vncserver_listen=192.168.230.151

vncserver_proxyclient_address=192.168.230.151


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


8. Create the Nova tables

$ nova-manage db sync


9. Register the init scripts

$ sudo vi /etc/init/nova-api.conf


description "Nova api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log" stack


$ sudo service nova-api start


$ sudo vi /etc/init/nova-scheduler.conf


description "Nova scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-scheduler --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack


$ sudo service nova-scheduler start


$ sudo vi /etc/init/nova-conductor.conf


description "Nova conductor server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack


$ sudo service nova-conductor start


$ sudo vi /etc/init/nova-consoleauth.conf


description "Nova consoleauth server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-consoleauth --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-consoleauth.log" stack


$ sudo service nova-consoleauth start



$ sudo vi /etc/init/nova-console.conf


description "Nova console server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-console --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-console.log" stack


$ sudo service nova-console start


$ sudo vi /etc/init/nova-cert.conf


description "Nova cert server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-cert --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-cert.log" stack


$ sudo service nova-cert start


$ sudo vi /etc/init/nova-novncproxy.conf


description "Nova novncproxy server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novncproxy.log" stack


$ sudo service nova-novncproxy start



[ Install Neutron on the controller ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the database

$ mysql -uroot -pmysql

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';


3. Register the service

$ keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com

$ keystone service-create --name=neutron --type=network --description="OpenStack Networking"

$ keystone user-role-add --user=neutron --tenant=service --role=admin

$ keystone endpoint-create --service=neutron --publicurl http://controller:9696 --adminurl http://controller:9696  --internalurl http://controller:9696


4. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo mkdir -p /etc/neutron/plugins

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/plugins/ml2 /etc/neutron/plugins/.

$ sudo cp -R ~/Git/neutron/etc/neutron/rootwrap.d/ /etc/neutron/.


5. Change the conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


6. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

router_distributed = True

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = 86be..........       # must be the tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


7. Configure the ML2 plugin

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.151

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


8. Create the Neutron tables

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno
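
To confirm the migration applied, neutron-db-manage can print the current DB revision (same config files as above; a minimal check):

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current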


9. Register the init script

sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script

  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server

  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \

    --config-file /etc/neutron/neutron.conf \

    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \

    --log-file /var/log/neutron/neutron-server.log $CONF_ARG

end script


Start the Neutron server manually

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


10. Verify with a CLI command

neutron ext-list


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash

sudo service neutron-server $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart



###############   Network   ######################


[ Neutron Network Node Install ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # install pbr separately because of a versioning issue

$ sudo pip install -e .


$ sudo apt-get install dnsmasq


2. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


4. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = service             # use the service tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Configure the ML2 plugin

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.152

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr_snat


7. Configure the DHCP agent

vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                      # using RegionOne here causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2
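
Optional check after creating the bridges: ovs-vsctl show should list br-ex with eth0 and br-tun with eth2 attached.

$ sudo ovs-vsctl show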


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-dhcp-agent.conf


description "Neutron dhcp Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/neutron-dhcp-agent.log" stack


$ sudo service neutron-dhcp-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Verify the installation

$ neutron agent-list


12. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-dhcp-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




###############   Compute   ######################


[ Nova Compute Node Install ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # install pbr separately because of a versioning issue

$ sudo pip install -e .


2. Create the conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lib/nova/instances

$ sudo chown stack.stack /var/lib/nova/instances

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


Copy nova.conf and logging.conf (run on the Controller node)

scp /etc/nova/logging.conf /etc/nova/nova.conf stack@compute:/etc/nova/.


4. Configure nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.153

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

novncproxy_base_url=http://controller:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=controller


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


5. Register the init script

$ sudo vi /etc/init/nova-compute.conf


description "Nova compute server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-compute --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-compute.log" stack


$ sudo service nova-compute start
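
If nova-compute started correctly, it should show up on the controller (run with admin credentials; output depends on the environment):

$ nova service-list

$ nova hypervisor-list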



[ Neutron Compute Install ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.ini ~/Git/neutron/etc/*.conf ~/Git/neutron/etc/*.json /etc/neutron/.

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


Copy the etc files (from the Network Node)

scp /etc/neutron/* stack@compute:/etc/neutron/.

$ scp /etc/neutron/plugins/ml2/ml2_conf.ini stack@compute:/etc/neutron/plugins/ml2/.


4. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = service             # use the service tenant ID, not the name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Configure the ML2 plugin

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]                                          # added on the Compute Node

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]                                             # added on the Compute Node

local_ip = 192.168.200.153

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr


7. Configure the DHCP agent

vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                    # using RegionOne here causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


$ keystone endpoint-list                    # check the region before setting it


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




Create the external network

$ neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat


neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.75.193,end=192.168.75.254 --disable-dhcp --gateway 192.168.75.2 192.168.75.0/24


Create the internal network

neutron net-create demo-net --provider:network_type vxlan 

neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24


Create the router

$ neutron router-create demo-router

$ neutron router-interface-add demo-router demo-subnet

$ neutron router-gateway-set demo-router ext-net
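
Optional check: the router should now have an interface on demo-subnet and a gateway port on ext-net.

$ neutron router-show demo-router

$ neutron router-port-list demo-router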


Register security rules

$ neutron security-group-rule-create --protocol icmp --direction ingress default

$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default


MTU settings

1. Set it in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set it dynamically (reboot required)

$ ifconfig eth2 mtu 9000

$ reboot


Add the default gateway route

$ sudo route add -net "0.0.0.0/0" gw "10.0.0.1"


Create a VM

$ nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


Add a floating IP

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]
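
After associating, the floating IP should appear bound to the fixed IP, and with the icmp rule above the VM should answer pings from outside (the IP below is an example from the ext-subnet pool):

$ neutron floatingip-list

$ ping 192.168.75.193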


Calling the metadata service

From inside the cirros VM

$ wget http://169.254.169.254/latest/meta-data/instance-id


Call metadata directly from the Controller node

$ curl \

  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \

  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \

  -H 'x-instance-id-signature: \

       80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \

  http://192.168.230.230:8775/latest/meta-data


[ Computing the x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> hmac.new('openstack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>
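
The same signature can also be computed from the shell with openssl, assuming the shared secret configured above ('openstack'); this is just an alternative sketch of the Python snippet:

$ echo -n 'e9b12a36-ae7a-4d2c-be03-319655789927' | openssl dgst -sha256 -hmac 'openstack'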


View the configuration without comments

cat /etc/nova/nova.conf  | grep -v ^# | grep -v ^$


Check whether the VXLAN tunnels are connected

$ sudo ovs-ofctl show br-tun


Order of operations to remove a network

1. Remove the interface between the router and the subnet

$ neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

$ neutron subnet-delete [subnet-id]


3. Delete the network

$ neutron net-delete [net-id]


Build a distribution that can be installed with pip

$ sudo python setup.py sdist --formats=gztar
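
The resulting tarball lands in dist/ and can then be installed with pip (the file name varies by version; shown here only as an example):

$ ls dist/

$ sudo pip install dist/*.tar.gz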


























1. Network A -> Network A

PREROUTING(nat:dnat) -> INPUT(filter) -> OUTPUT(nat:dnat) -> OUTPUT(filter) -> POSTROUTING(nat:snat)


2. Network A -> Network B

PREROUTING(nat:dnat) -> FORWARD(filter) -> POSTROUTING(nat:snat)


3. iptables nat after creating a Nova instance

PREROUTING ACCEPT

    nova-network-PREROUTING

        -> VM DNAT translation

    nova-compute-PREROUTING

    nova-api-metadat-PREROUTING

INPUT ACCEPT

OUTPUT ACCEPT

    nova-network-OUTPUT

        -> VM DNAT translation

    nova-compute-OUTPUT

    nova-api-metadat-OUTPUT

POSTROUTING ACCEPT

    nova-network-POSTROUTING

    nova-compute-POSTROUTING

    nova-api-metadat-POSTROUTING

    nova-postrouting-bottom

        nova-network-snat

            nova-network-float-snat

                -> VM SNAT translation

            

            -> Host SNAT translation

        nova-compute-snat

            nova-compute-float-snat

        nova-api-metadat-snat

            nova-api-metadat-float-snat


4. iptables filter after creating a Nova instance

INPUT ACCEPT

    nova-compute-INPUT

    nova-network-INPUT

        - open DHCP (per bridge)

    nova-api-metadat-INPUT

        - accept the nova metadata API port 8775

FORWARD ACCEPT

    nova-filter-top

        nova-compute-local

            - nova-compute-inst-732 (created per instance)

                nova-compute-provider

                - security rules applied here

                nova-compute-sg-fallback

                    - drop all packets

        nova-network-local

        nova-api-metadat-local

    nova-compute-FORWARD

    nova-network-FORWARD

        - accept in/out packets per bridge

    nova-api-metadat-FORWARD

OUTPUT ACCEPT

    nova-filter-top

        nova-compute-local

            - nova-compute-inst-732 (created per instance)

                nova-compute-provider

                - security rules applied here

                nova-compute-sg-fallback

                    - drop all packets

        nova-network-local

        nova-api-metadat-local

    nova-compute-OUTPUT

    nova-network-OUTPUT

    nova-api-metadat-OUTPUT
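
The chains above can be inspected on a compute host with iptables; the per-instance chain names vary by deployment:

$ sudo iptables -t nat -L -n -v

$ sudo iptables-save | grep nova-compute-inst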






[ Controller Install ]


1. controller node install (nova, mysql, rabbitmq keystone, glance, cinder, horizon)

$ sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient


$ sudo apt-get install mysql-server-5.5


$ sudo apt-get install rabbitmq-server


$ sudo apt-get install keystone python-keystoneclient


$ sudo apt-get install glance python-glanceclient


$ sudo apt-get install cinder-api cinder-scheduler cinder-volume


$ sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard


2. database configuration (nova, glance, cinder, keystone)

$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

$ sudo vi /etc/mysql/my.cnf

[mysqld] 

# add the following

skip-host-cache 
skip-name-resolve 


$ sudo service mysql restart


$ mysql -u root -p

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';


mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';


mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';


mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

           IDENTIFIED BY 'KEYSTONE_DBPASS';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

           IDENTIFIED BY 'KEYSTONE_DBPASS';


sudo vi /etc/hosts.allow

ALL:192.168.0.0/255.255.0.0

mysqld:ALL


3. keystone setting

$ sudo rm /var/lib/keystone/keystone.db

$ sudo vi /etc/keystone/keystone.conf

connection = mysql://keystone:KEYSTONE_DBPASS@localhost/keystone

token_format = UUID


$ sudo keystone-manage db_sync

$ sudo service keystone restart
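
Before the scripts below create users, keystone can be checked directly with the admin token (this assumes the default admin_token of ADMIN in keystone.conf, the same token the scripts below export):

$ keystone --os-token ADMIN --os-endpoint http://192.168.75.131:35357/v2.0 tenant-list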


$ vi keystone_basic.sh

#!/bin/sh

#

# Keystone basic configuration 


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#

HOST_IP=192.168.75.131

ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin_pass}

SERVICE_PASSWORD=${SERVICE_PASSWORD:-service_pass}

export SERVICE_TOKEN="ADMIN"

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"

SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}


get_id () {

    echo `$@ | awk '/ id / { print $4 }'`

}


# Tenants

ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)

SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)



# Users

ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)



# Roles

ADMIN_ROLE=$(get_id keystone role-create --name=admin)

KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)

KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)


# Add Roles to Users in Tenants

keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT


# The Member role is used by Horizon and Swift

MEMBER_ROLE=$(get_id keystone role-create --name=Member)


# Configure service users/roles

NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE


GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE


QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE


CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE


$ vi keystone_endpoints_basic.sh

#!/bin/sh

#

# Keystone basic Endpoints


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#


# Host address

HOST_IP=192.168.75.131

EXT_HOST_IP=192.168.75.131

VOLUME_HOST_IP=192.168.75.131

VOLUME_EXT_HOST_IP=192.168.75.131

NETWORK_HOST_IP=192.168.75.131

NETWORK_EXT_HOST_IP=192.168.75.131


# MySQL definitions

MYSQL_USER=keystone

MYSQL_DATABASE=keystone

MYSQL_HOST=$HOST_IP

MYSQL_PASSWORD=KEYSTONE_DBPASS


# Keystone definitions

KEYSTONE_REGION=RegionOne

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"


while getopts "u:D:p:m:K:R:E:T:vh" opt; do

  case $opt in

    u)

      MYSQL_USER=$OPTARG

      ;;

    D)

      MYSQL_DATABASE=$OPTARG

      ;;

    p)

      MYSQL_PASSWORD=$OPTARG

      ;;

    m)

      MYSQL_HOST=$OPTARG

      ;;

    K)

      MASTER=$OPTARG

      ;;

    R)

      KEYSTONE_REGION=$OPTARG

      ;;

    E)

      export SERVICE_ENDPOINT=$OPTARG

      ;;

    T)

      export SERVICE_TOKEN=$OPTARG

      ;;

    v)

      set -x

      ;;

    h)

      cat <<EOF

Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]

       [-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ] 

       [ -T keystone_token ]

          

Add -v for verbose mode, -h to display this message.

EOF

      exit 0

      ;;

    \?)

      echo "Unknown option -$OPTARG" >&2

      exit 1

      ;;

    :)

      echo "Option -$OPTARG requires an argument" >&2

      exit 1

      ;;

  esac

done  


if [ -z "$KEYSTONE_REGION" ]; then

  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_TOKEN" ]; then

  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_ENDPOINT" ]; then

  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2

  missing_args="true"

fi


if [ -z "$MYSQL_PASSWORD" ]; then

  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2

  missing_args="true"

fi


if [ -n "$missing_args" ]; then

  exit 1

fi

 

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'

keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'

keystone service-create --name glance --type image --description 'OpenStack Image Service'

keystone service-create --name keystone --type identity --description 'OpenStack Identity'

keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'

keystone service-create --name quantum --type network --description 'OpenStack Networking service'


create_endpoint () {

  case $1 in

    compute)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'

    ;;

    volume)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$VOLUME_EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s'

    ;;

    image)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/v2' --adminurl 'http://'"$HOST_IP"':9292/v2' --internalurl 'http://'"$HOST_IP"':9292/v2'

    ;;

    identity)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'

    ;;

    ec2)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'

    ;;

    network)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$NETWORK_EXT_HOST_IP"':9696/' --adminurl 'http://'"$NETWORK_HOST_IP"':9696/' --internalurl 'http://'"$NETWORK_HOST_IP"':9696/'

    ;;

  esac

}


for i in compute volume image object-store identity ec2 network; do

  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1

  create_endpoint $i $id

done


$ vi admin.rc

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin_pass

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


$ keystone tenant-create --name DEV --enabled true

$ keystone user-create --name dev_admin --tenant 5e795212d0804ad89234d9a1ac30c8ca --pass adminPass --enabled true

$ keystone user-create --name dev_user01 --tenant 5e795212d0804ad89234d9a1ac30c8ca --pass userPass --enabled true


# Attach the Admin role to dev_admin

$ keystone user-role-add --user c207c127ba7c46d2bf18f6c39ac4ff78 --role 19f87df854914a1a903972f70d7d631a --tenant 5e795212d0804ad89234d9a1ac30c8ca


# Attach the Member role to dev_user01

keystone user-role-add --user 908c6c5691374d6a95b64fea0e1615ce --role b13ffb470d1040d298e08cf9f5a6003a --tenant 5e795212d0804ad89234d9a1ac30c8ca
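
The user, role, and tenant IDs used above come from the list commands; look them up first if your IDs differ:

$ keystone tenant-list

$ keystone user-list

$ keystone role-list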



$ vi dev_admin.rc

export OS_USERNAME=dev_admin

export OS_PASSWORD=adminPass

export OS_TENANT_NAME=DEV

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


$ vi dev_user.rc

export OS_USERNAME=dev_user01

export OS_PASSWORD=userPass

export OS_TENANT_NAME=DEV

export OS_AUTH_URL="http://192.168.75.131:5000/v2.0/"


4. nova setting

$ sudo vi /etc/nova/nova.conf


dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

logdir=/var/log/nova 

state_path=/var/lib/nova 

lock_path=/var/lock/nova 

force_dhcp_release=True 

libvirt_use_virtio_for_bridges=True 

connection_type=libvirt 

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf 

verbose=True 

debug=True 

ec2_private_dns_show_ip=True 

api_paste_config=/etc/nova/api-paste.ini 

enabled_apis=ec2,osapi_compute,metadata 

cinder_catalog_info=volume:cinder:adminURL

use_network_dns_servers=True

metadata_host=192.168.75.131

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_manager=nova.api.manager.MetadataManager

metadata_port=8775

vncserver_proxyclient_address=192.168.230.131

vncserver_listen=0.0.0.0

vnc_enabled=true

xvpvncproxy_base_url=http://192.168.230.131:6081/console

novncproxy_base_url=http://192.168.230.131:6080/vnc_auto.html

remove_unused_base_images=False

image_create_to_qcow2 = True

api_rate_limit=True


#rpc setting 

rpc_backend = rabbit 

rabbit_host = 192.168.230.131


#network setting 

network_api_class = nova.network.api.API 

security_group_api = nova


# Network settings 

dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

network_manager=nova.network.manager.VlanManager 

network_api_class=nova.network.api.API 

dhcp_lease_time=600 

vlan_start=1001 

fixed_range=10.0.0.0/16 

allow_same_net_traffic=False 

multi_host=True 

send_arp_for_ha=True 

#share_dhcp_address=True 

force_dhcp_release=True 

flat_interface = eth1

public_interface=eth0


#auth setting 

use_deprecated_auth = false

auth_strategy = keystone


#image setting 

glance_api_servers = 192.168.75.131:9292 

image_service = nova.image.glance.GlanceImageService 

glance_host = 192.168.230.131


[database] 

connection = mysql://nova:NOVA_DBPASS@localhost/nova

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin 

admin_password = admin_pass


$ sudo nova-manage db sync

$ sudo service nova-api restart

$ sudo service nova-cert restart

$ sudo service nova-consoleauth restart

$ sudo service nova-scheduler restart

$ sudo service nova-conductor restart

$ sudo service nova-novncproxy restart


5. glance setting

$ sudo vi /etc/glance/glance-api.conf


# comment out the following options

qpid, swift_store, s3_store, sheepdog_store


rabbit_host = 192.168.230.131

rabbit_port = 5672 

rabbit_use_ssl = false 

rabbit_virtual_host = / 

rabbit_notification_exchange = glance

rabbit_notification_topic = notifications 

rabbit_durable_queues = False

 

[database]

connection = mysql://glance:GLANCE_DBPASS@192.168.230.131/glance

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357 

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin

admin_password = admin_pass


[paste_deploy]

flavor=keystone


$ sudo vi /etc/glance/glance-registry.conf


[database]

connection = mysql://glance:GLANCE_DBPASS@192.168.230.131/glance

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin

admin_password = admin_pass


[paste_deploy]

flavor=keystone


$ mysql -u root -p

mysql> use glance;

mysql> alter table migrate_version convert to character set utf8 collate utf8_unicode_ci;

mysql> flush privileges;


$ sudo glance-manage db_sync

$ sudo service glance-api restart

$ sudo service glance-registry restart


$ glance image-create --name ubuntu-14.04-cloudimg --disk-format qcow2 --container-format bare --owner e07a35f02d9e4281b8336d9112faed51 --file ubuntu-14.04-server-cloudimg-amd64-disk1.img --is-public True --progress


$ wget --no-check-certificate https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

$ glance image-create --name cirros-0.3.0 --disk-format qcow2 --container-format bare --owner e07a35f02d9e4281b8336d9112faed51 --file cirros-0.3.0-x86_64-disk.img --is-public True --progress
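
Check that both images registered correctly:

$ glance image-list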


6. cinder setting

$ sudo cinder-manage db sync

$ sudo vi /etc/cinder/cinder.conf


[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_confg = /etc/cinder/api-paste.ini

iscsi_helper = tgtadm

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

verbose = True

debug=True

auth_strategy = keystone

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes


default_availability_zone=LH_ZONE

storage_availability_zone=LH_ZONE


rpc_backend = cinder.openstack.common.rpc.impl_kombu

rabbit_host = 192.168.75.131

rabbit_port = 5672


glance_host=192.168.230.131

glance_port=9292

glance_api_servers=$glance_host:$glance_port


default_volume_type=LOW_END


# multi backend

enabled_backends=LEFTHAND,SOLIDFIRE

[LEFTHAND]

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

volume_driver=cinder.volume.drivers.san.hp.hp_lefthand_iscsi.HPLeftHandISCSIDriver

volume_backend_name=ISCSI_LH

san_ip=192.168.230.141

san_login=admin

san_password=admin_pass

san_clustername=CLUSTER-LEFTHAND

san_ssh_port=16022


[SOLIDFIRE]

volume_name_template = volume-sfpoc-%s

volume_group = cinder-volumes

verbose = True

volume_driver=cinder.volume.drivers.solidfire.SolidFireDriver

volume_backend_name=ISCSI_SF

san_ip=192.168.230.151

san_login=admin

san_password=admin_pass



[database]

connection=mysql://cinder:cinderPass@192.168.75.131/cinder


[keystone_authtoken]

auth_uri = http://192.168.75.131:5000

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http

admin_tenant_name = admin

admin_user = admin

admin_password = admin_pass


$ sudo cinder-manage db sync

$ sudo service cinder-api restart

$ sudo service cinder-volume restart

$ sudo service cinder-scheduler restart


7. View LeftHand cluster information

$ ssh -p 16022 user@192.168.230.140

CLIQ> getclusterinfo searchdepth=1 verbose=0

CLIQ> getserverinfo servername=ubuntu

CLIQ> getvolumeinfo volumename=volume-sfpoc-9d36737a-d332-4613-bce2-32465904a6fc


8. Configure multiple backends

$ cinder type-create LOW_END

$ cinder type-key LOW_END set volume_backend_name=ISCSI_LH

$ cinder type-create HIGH_END

$ cinder type-key HIGH_END set volume_backend_name=ISCSI_SF


# Create a 1 GB high-end volume

$ cinder create --display-name high-test-01 --volume-type HIGH_END 1
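
Check the volume status; it should move to available once the SOLIDFIRE backend provisions it:

$ cinder list

$ cinder show high-test-01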


9. Configure backend QoS

$ cinder type-create IOPS_3000

$ cinder type-key IOPS_3000 set volume_backend_name=ISCSI_SF

$ cinder qos-create QOS_IOPS_3000 consumer="back-end" minIOPS=3000 maxIOPS=3000 burstIOPS=3000

$ cinder qos-associate 1e9694b8-eca4-4ce7-b476-d1637535aaa2 9c241c66-30fd-442b-b7a1-79b4f1892919

$ cinder qos-get-association 1e9694b8-eca4-4ce7-b476-d1637535aaa2



[ Compute Node Install ]


1. compute node install (nova-compute, nova-network, nova-api-metadata)

$ sudo apt-get install nova-compute-kvm nova-network nova-api-metadata





[ Basic Settings ]


1. network setting

$ nova network-create --fixed-range-v4 10.0.0.0/24 --vlan 1001 --gateway 10.0.0.1 --bridge br1001 --bridge-interface eth0 --multi-host T --dns1 8.8.8.8 --dns2 8.8.4.4 --project-id 5e795212d0804ad89234d9a1ac30c8ca dev_network


2. fixed ip reserve

$ nova fixed-ip-reserve 10.0.0.3

$ nova fixed-ip-reserve 10.0.0.4

$ nova fixed-ip-reserve 10.0.0.5


3. floating ip create

$ nova floating-ip-bulk-create 192.168.75.128/25 --interface eth0


4. Create a security group

$ nova secgroup-create connect 'icmp and ssh'

$ nova secgroup-add-rule connect icmp -1 -1 0.0.0.0/0

$ nova secgroup-add-rule connect tcp 22 22 0.0.0.0/0


5. Create a keypair

$ nova keypair-add stephen >> stephen.pem


6. Copy the pem file to another host

$ scp -P 22 dev_admin.pem stack@192.168.230.132:~/creds/.

$ chmod 600 dev_admin.pem


7. Copy nova.conf to the other multi-host nodes

$ for i in `seq 132 134`; do scp nova.conf stack@192.168.230.$i:~/creds/.; done


8. Configure the zone

$ nova aggregate-create POC LH_ZONE

$ nova aggregate-add-host POC ubuntu


9. Create a VM

$ nova boot test01 --flavor 1 --image 4399bba0-17a4-43ef-8fdd-4edd9c2afe74 --key_name dev_admin --security_group connect


# boot from a volume and attach another volume at the same time

$ nova boot [name] --flavor [flavorid] 

  --block-device id=[imageid],source=image,dest=volume,size=10,bootindex=0,shutdown=remove

  --block-device id=[volumeid],source=volume,dest=volume,size=100,bootindex=1


10. Connect to the VM

$ ssh -i dev_admin.pem cirros@10.0.0.6

$ ssh -i dev_admin.pem ubuntu@10.0.0.6




[ VMware-related Settings ]


1. cinder.conf

[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_confg = /etc/cinder/api-paste.ini

iscsi_helper = tgtadm

volume_name_template = %s

volume_group = cinder-volumes

verbose = True

debug=True

auth_strategy = keystone

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes


default_availability_zone=VMWARE_ZONE

storage_availability_zone=VMWARE_ZONE


rpc_backend = cinder.openstack.common.rpc.impl_kombu

rabbit_host = 192.168.75.131

rabbit_port = 5672


glance_host=192.168.75.131

glance_port=9292

glance_api_servers=$glance_host:$glance_port


default_volume_type=VMWARE_TYPE


# multi backend

enabled_backends=VMWARE_DRIVER


[VMWARE_DRIVER]

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareEsxVmdkDriver

volume_backend_name=VMWARE

vmware_host_ip = 192.168.75.131

vmware_host_password = VMWARE_PASSWORD

vmware_host_username = root


[database]

connection=mysql://cinder:cinderPass@192.168.75.131/cinder


[keystone_authtoken]

auth_uri = http://192.168.75.131:5000

auth_host = 192.168.75.131

auth_port = 35357

auth_protocol = http

admin_tenant_name = admin

admin_user = admin

admin_password = admin_pass


2. Configure multiple backends

$ cinder type-create VMWARE_TYPE

$ cinder type-key VMWARE_TYPE set volume_backend_name=VMWARE


# Create a 1 GB volume

$ cinder create --display-name test-01 --volume-type VMWARE_TYPE 1


3. nova.conf 

$ sudo vi /etc/nova/nova.conf


dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

logdir=/var/log/nova 

state_path=/var/lib/nova 

lock_path=/var/lock/nova 

force_dhcp_release=True 

# libvirt_use_virtio_for_bridges=True 

# connection_type=libvirt 

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf 

verbose=True 

debug=True 

ec2_private_dns_show_ip=True 

api_paste_config=/etc/nova/api-paste.ini 

enabled_apis=ec2,osapi_compute,metadata 

cinder_catalog_info=volume:cinder:adminURL

use_network_dns_servers=True

metadata_host=192.168.75.131

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_manager=nova.api.manager.MetadataManager

metadata_port=8775

vncserver_proxyclient_address=192.168.230.131

vncserver_listen=0.0.0.0

vnc_enabled=true

xvpvncproxy_base_url=http://192.168.230.131:6081/console

novncproxy_base_url=http://192.168.230.131:6080/vnc_auto.html

compute_driver = vmwareapi.VMwareVCDriver

remove_unused_base_images=False

image_create_to_qcow2 = True

api_rate_limit=True


#rpc setting 

rpc_backend = rabbit 

rabbit_host = 192.168.230.131


#network setting 

network_api_class = nova.network.api.API 

security_group_api = nova


# Network settings 

dhcpbridge_flagfile=/etc/nova/nova.conf 

dhcpbridge=/usr/bin/nova-dhcpbridge 

network_manager=nova.network.manager.VlanManager 

network_api_class=nova.network.api.API 

dhcp_lease_time=600 

vlan_start=1001 

fixed_range=10.0.0.0/16 

allow_same_net_traffic=False 

multi_host=True 

send_arp_for_ha=True 

#share_dhcp_address=True 

force_dhcp_release=True 

flat_interface = eth0

public_interface=eth0


#auth setting 

use_deprecated_auth = false

auth_strategy = keystone


#image setting 

glance_api_servers = 192.168.75.131:9292 

image_service = nova.image.glance.GlanceImageService 

glance_host = 192.168.230.131


[vmware]

host_ip = 192.168.75.131

host_username = root

host_password = VMWARE_PASSWORD

cluster_name = cluster1

use_linked_clone = False


[database] 

connection = mysql://nova:NOVA_DBPASS@localhost/nova

 

[keystone_authtoken] 

auth_uri = http://192.168.75.131:5000 

auth_host = 192.168.75.131 

auth_port = 35357

auth_protocol = http 

admin_tenant_name = admin 

admin_user = admin 

admin_password = admin_pass


4. nova-compute.conf

#[DEFAULT]

#compute_driver=libvirt.LibvirtDriver

#[libvirt]

#virt_type=kvm


5. Configure the zone

$ nova aggregate-create VMWARE VMWARE_ZONE

$ nova aggregate-add-host VMWARE controller


6. Register images

[ slitaz linux ]

wget http://partnerweb.vmware.com/programs/vmdkimage/trend-tinyvm1-flat.vmdk

$ glance image-create --name [vmware]trend-static-thin --file trend-tinyvm1-flat.vmdk --is-public=True --container-format=bare --disk-format=vmdk --property vmware_disktype="thin" --property vmware_adaptertype="ide"


[ Connect to slitaz linux and switch to DHCP ]

Log in with vmware / vmware, then gain root privileges with root / root


# vi /etc/network.conf

DHCP="yes"

STATIC="no"


[ cirros ]

wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

$ qemu-img convert -f qcow2 -O vmdk cirros-0.3.3-x86_64-disk.img cirros-0.3.3-x86_64-disk.vmdk

$ glance image-create --name [vmware]cirros-0.3.3 --disk-format vmdk --container-format bare --file cirros-0.3.3-x86_64-disk.vmdk --property vmware-disktype="sparse" --property hw_vif_model="VirtualVmxnet" --property vmware_adaptertype="ide" --is-public True --progress


7. Save a VM as an image

1. Connect to the ESXi host

2. Move to the VM directory

# cd /vmfs/volumes/datastore1/6c516279-c83f-43ec-a8d4-bec540604280

3. thin copy

# vmkfstools -i 6c516279-c83f-43ec-a8d4-bec540604280.vmdk -d thin .

./vmware_temp/trend-tinyvm1-dhcp-thin.vmdk

4. Fetch it from another host with scp

$ scp root@192.168.75.182:/vmfs/volumes/542cf526-bef9f829-2f02-000c29fef6ec/vmware_temp/trend-tinyvm1-dhcp-thin-flat.vmdk .


8. nova boot

$ nova hypervisor-list

$ nova boot test01 --flavor 1 --image 6d9745dc-0fc9-4802-b21d-329004353406 --key_name stephen --availability-zone "VMWARE_ZONE::domain-c12(cluster1)"











1. The libvirt version must be the same across compute hosts.

2. libvirtd must be running with the "-d -l" options.


# vi /etc/libvirt/libvirtd.conf

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"


# vi /etc/init/libvirt-bin.conf

env libvirtd_opts="-d -l"


# vi /etc/default/libvirt-bin

libvirtd_opts=" -d -l"


sudo service libvirt-bin restart


3. The "send_arp_for_ha" flag in nova.conf must be set to True


# vi /etc/nova/nova.conf

send_arp_for_ha=True

#force_config_drive = always

block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
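
With those flags in place, a block live migration can be triggered from the controller; the instance name and target host below are only examples:

$ nova live-migration --block-migrate test01 compute2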

