[ Prerequisites ]

Log in to openstack.org with an OpenID account.

Items to register under Settings on review.openstack.org:

1. Profile >> Username

      - the username used when submitting Gerrit reviews

    Profile >> Full Name, Email Address

      - the user.name and user.email values to set in git config

2. SSH Public Keys

    -> register your SSH public keys

3. Agreements

    -> sign the Contributor Agreement


# clone a project (nova)
git clone git://github.com/openstack/nova.git

-> If you installed with DevStack, the repository is already cloned at /opt/stack/nova

# check the SSH port (open in a browser)
https://review.openstack.org/ssh_info

# Testing Gerrit Connections
ssh -p 29418 StephenAhn@review.openstack.org

# Setting username
git config --global --add gitreview.username "StephenAhn"

# save a Gerrit shortcut host
vi ~/.ssh/config
Host review
  Hostname review.openstack.org
  Port 29418
  User StephenAhn

# verify the gerrit remote and install the Change-Id commit hook
git review -s

# if the gerrit remote check fails, add the remote manually
git remote add gerrit ssh://StephenAhn@review.openstack.org:29418/openstack/nova.git

# update to the latest source (for DevStack, in the /opt/stack/nova directory)
git remote update
git checkout master
git pull --ff-only origin master

# create a blueprint branch (the blueprint name is local-storage-volume-scheduling)
git checkout -b bp/local-storage-volume-scheduling

# add your email to .mailmap (multiple emails are allowed)
vi .mailmap
    <skanddh@gmail.com> <xxx@xxx.com>

# write the commit message
git commit --amend

Keep the first line to a short summary of 50 characters or less.
[blank line]
Write the description below it, wrapping lines at 72 characters.
.....

Add volume retype to Cinder client.
Cinder code: https://review.openstack.org/#/c/44881/

DocImpact
Implements: blueprint local-storage-volume-scheduling

Do not write a Change-Id line; it is added automatically.

[ Tags to add to the commit message ]
DocImpact           -> add when the documentation must change
SecurityImpact      -> add when there is a security concern and the OpenStack
                       Security Group should review the change
UpgradeImpact       -> add when the change affects upgrades
                       (consider updating the 'Upgrade Notes' section of the release notes)

# submit the review (besides the sample template, any of the three commands below works)
# sample template
git push ssh://StephenAhn@review.openstack.org:29418/<Project Name> HEAD:refs/for/<Branch Name>

git push ssh://StephenAhn@review.openstack.org:29418/openstack/cinder HEAD:refs/for/bp/local-storage-volume-scheduling
git push review:openstack/cinder HEAD:refs/for/bp/local-storage-volume-scheduling
git review


[ packages required for running unit tests ]
sudo apt-get install python-dev libssl-dev python-pip git-core libmysqlclient-dev libpq-dev
sudo apt-get install libxml2-dev libxslt-dev libvirt-dev
sudo apt-get install python-virtualenv testrepository

[ nova unit test ]
cd /opt/stack/nova
./run_tests.sh

# pep8 coding-style check
./run_tests.sh -p

# if an error occurs with netaddr>=0.7.6
$ source .venv/bin/activate
$ wget https://github.com/downloads/drkjam/netaddr/netaddr-0.7.9.zip
$ unzip netaddr-0.7.9.zip
$ cd netaddr-0.7.9
$ python setup.py install

# error when installing libvirt-python 1.2.5 on Ubuntu 12.04
Ubuntu 12.04 ships libvirt 0.9.8 by default, so upgrade to libvirt 1.2.0
$ sudo apt-get update
sudo apt-get -y install \
    gcc \
    make \
    pkg-config \
    libxml2-dev \
    libgnutls-dev \
    libdevmapper-dev \
    libcurl4-gnutls-dev \
    python-dev \
    libpciaccess-dev \
    libxen-dev \
    libyajl-dev \
    libnl-dev

sudo mkdir -p /opt/libvirt
$ sudo chmod 00755 /opt/libvirt
$ sudo chown root:root /opt/libvirt
$ sudo chmod a+w /opt/libvirt
$ cd /opt/libvirt
$ wget http://libvirt.org/sources/libvirt-1.2.0.tar.gz
$ tar xzvf libvirt-1.2.0.tar.gz
$ mv libvirt-1.2.0 libvirt
$ cd libvirt
./configure \
    --prefix=/usr \
    --localstatedir=/var \
    --sysconfdir=/etc \
    --with-esx=yes \
    --with-xen=yes
$ make -j
$ sudo make install

$ ./run_tests.sh


# how to fix the "no permission" message from git review
git review

"fatal: ICLA contributor agreement requires current contact information.
Please review your contact information:
  https://review.openstack.org/#/settings/contact

fatal: The remote end hung up unexpectedly"

1. Open https://review.openstack.org/#/settings/contact
2. Under Contact Information, fill in the Mailing Address, Country, Phone Number, and Fax Number fields
3. If "Contact information last updated on <date>." appears in the middle of the page, the update succeeded
4. Run git review again




Posted by Kubernetes Korea co-leader seungkyua@gmail.com


1. Linux guests synchronize with the host using kvm-clock, which is built into the kernel.


2. Windows guests have no kvm-clock, so use one of the following two mechanisms to synchronize:

     - RTC (Real Time Clock)

        bcdedit /set {default} USEPLATFORMCLOCK on


        <clock offset='localtime'>

            <timer name='rtc' tickpolicy='catchup' track='guest'/>
            <timer name='pit' tickpolicy='delay'/>
            <timer name='hpet' present='no'/>
         </clock>


     - TSC (Time Stamp Counter)



Here are my presentation slides on OpenStack from DevOn 2013.


OpenStack-Overview.pdf




Here is how to install DevStack with neutron included.

The OS is Ubuntu Desktop 12.04 LTS.

* On the latest February 2014 trunk, nova/virt/libvirt/driver.py supports only python-libvirt 1.0.2+, so errors occur.
Install the latest Ubuntu release.

The network setup is as follows.

[ Single Node, or the node acting as both Controller and Compute in a Multi Node setup ]
eth0 : NAT type         192.168.75.136       Public Network
eth1 : Host-only type   192.168.230.136      Private Network

[ The second, additional Compute Node in a Multi Node setup ]
eth0 : NAT type         192.168.75.137       Public Network
eth1 : Host-only type   192.168.230.137      Private Network

[ User ]
Create and work as the stack user

[ visudo settings ]
stack   ALL=(ALL:ALL) NOPASSWD:ALL

[ vi /etc/network/interfaces ]
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.75.136
        netmask 255.255.255.0
        gateway 192.168.75.2
        dns-nameservers 8.8.8.8 8.8.4.4

auto eth1
iface eth1 inet static
        address 192.168.230.136
        netmask 255.255.255.0

[ remove network-manager ]
sudo apt-get purge network-manager
sudo apt-get autoremove
sudo /etc/init.d/networking restart

[ if using a proxy -> apt proxy settings ]
sudo vi /etc/apt/apt.conf
Acquire::http::proxy "http://xx.xx.xx.xx:8080/";
Acquire::https::proxy "https://xx.xx.xx.xx:8080/";

sudo vi /etc/environment
http_proxy="http://xx.xx.xx.xx:8080/"
https_proxy="https://xx.xx.xx.xx:8080/"
no_proxy="ubuntu,localhost,127.0.0.1,192.168.75.136,192.168.230.136"

[ fixing python import errors ]
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

[ install git, set the user, and configure the proxy if needed ]
sudo apt-get -y install git git-review

git config --global user.name "Stephen Ahn"
git config --global user.email "skanddh@gmail.com"
git config --global http.proxy http://xx.xx.xx.xx:8080
git config --global https.proxy https://xx.xx.xx.xx:8080
git config --list

[ remove dmidecode ]
sudo apt-get install libvirt-bin
sudo apt-get purge dmidecode
kill -9 [dmidecode process]
sudo apt-get autoremove

[ delete default virtual bridge ]
Log in as root and run:
virsh net-destroy default
virsh net-undefine default

[ if using a proxy -> curl setup ]
For curl, copy the .crt file into place and select the xxx.cert entry during reconfiguration:
cp xxx.crt  /usr/share/ca-certificates/extra
dpkg-reconfigure ca-certificates

[ if using a proxy -> ~/.pip/pip.conf settings ]
[global]
cert = /usr/share/ca-certificates/extra/xxx.crt
index-url = http://pypi.gocept.com/simple/

[ DevStack clone ]
git clone https://github.com/openstack-dev/devstack.git

[ vi  lib/neutron_plugins/ovs_base ]
If q-agt errors say GRE or patch ports cannot be created, kernel 3.5.x-xx-generic is incompatible with openvswitch 1.4.0; fix it as follows:
Install openvswitch-datapath-lts-raring-dkms instead of openvswitch-datapath-dkms:
 41  install_package make fakeroot dkms openvswitch-switch openvswitch-datapath-lts-raring-dkms linux-headers-$kernel_version

[ vi localrc ]
# Devstack localrc for Quantum all in one
# default
HOST_IP=192.168.230.136
SERVICE_HOST=192.168.230.136

# Install Compute on multiple nodes
MULTI_HOST=True

# Private subnet
FIXED_RANGE=10.0.0.0/24

# Nova-network service
#enable_service n-net
#FIXED_NETWORK_SIZE=256
#FLOATING_RANGE=192.168.75.192/26
#FLAT_INTERFACE=eth1
#PUBLIC_INTERFACE=eth0

# Neutron External subnet
NETWORK_GATEWAY=10.0.0.1
FLOATING_RANGE=192.168.75.0/24
PUBLIC_NETWORK_GATEWAY=192.168.75.2
Q_FLOATING_ALLOCATION_POOL=start=192.168.75.193,end=192.168.75.254

# Neutron configuration
Q_PLUGIN=ml2
Q_ML2_PLUGIN_TYPE_DRIVERS=local,flat,vlan,gre,vxlan
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8472)
Q_AGENT_EXTRA_SRV_OPTS=(local_ip=$HOST_IP)
#Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True

# Nova service
enable_service n-api
enable_service n-crt
enable_service n-obj
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service n-novnc
enable_service n-cauth

# Cinder service
enable_service cinder
enable_service c-api
enable_service c-vol
enable_service c-sch
enable_service c-bak

# Tempest service
enable_service tempest

# Neutron service
disable_service n-net
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-lbaas

# Controller Node
Q_HOST=$SERVICE_HOST

# vnc
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

# logs
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# system password
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=admin

# Cinder configuration
VOLUME_GROUP="cinder-volumes"
VOLUME_NAME_PREFIX="volume-"

# Heat service
enable_service heat
enable_service h-api
enable_service h-api-cfn
enable_service h-api-cw
enable_service h-eng

# Murano service
enable_service murano
enable_service murano-api
enable_service murano-engine

# Ceilometer service
CEILOMETER_BACKEND=mongo
CEILOMETER_NOTIFICATION_TOPICS=notifications,profiler
enable_service ceilometer
enable_service ceilometer-acompute
enable_service ceilometer-acentral
enable_service ceilometer-collector
enable_service ceilometer-api
enable_service ceilometer-alarm-evaluator
enable_service ceilometer-alarm-notifier

# Swift service
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Trove service
enable_service trove
enable_service tr-api
enable_service tr-tmgr
enable_service tr-cond

# Images
# Use this image when creating test instances
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Use this image when working with Orchestration (Heat)
IMAGE_URLS+=",https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2"

KEYSTONE_CATALOG_BACKEND=sql
API_RATE_LIMIT=False
SWIFT_HASH=testing
SWIFT_REPLICAS=1
VOLUME_BACKING_FILE_SIZE=70000M

#scheduler
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler

# A clean install every time
#RECLONE=yes


[ Multi Node Compute setup ]
# vi localrc
HOST_IP=192.168.230.137
SERVICE_HOST=192.168.230.136

# Install Compute on multiple nodes
MULTI_HOST=True

# Neutron configuration
Q_PLUGIN=ml2
Q_ML2_PLUGIN_TYPE_DRIVERS=local,flat,vlan,gre,vxlan
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8472)
Q_AGENT_EXTRA_SRV_OPTS=(local_ip=$HOST_IP)
#Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True

# RabbitMQ, Compute
disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service n-novnc

# Nova-network service
#enable_service n-net
#FIXED_RANGE=10.0.0.0/24
#FIXED_NETWORK_SIZE=256
#FLOATING_RANGE=192.168.75.192/26
#FLAT_INTERFACE=eth0
#PUBLIC_INTERFACE=eth0

# Neutron L2 service
enable_service neutron
enable_service q-agt

# Cinder service
enable_service cinder

# Cinder configuration
#enable_service c-vol
#VOLUME_GROUP="cinder-volumes"
#VOLUME_NAME_PREFIX="volume-"

# Controller Node
Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

# vnc
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

# logs
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# system password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password

# A clean install every time
#RECLONE=yes

# edit /etc/nova/nova.conf
neutron_admin_password = password
sql_connection = mysql://root:password@192.168.230.136/nova?charset=utf8

# edit /etc/cinder/cinder.conf
sql_connection = mysql://root:password@192.168.230.136/cinder?charset=utf8
my_ip = 192.168.230.137




[ Murano installation ]
$ git clone git://git.openstack.org/openstack/murano
$ cd murano/contrib/devstack
$ cp lib/murano ${DEVSTACK_DIR}/lib
$ cp lib/murano-dashboard ${DEVSTACK_DIR}/lib
$ cp extras.d/70-murano.sh ${DEVSTACK_DIR}/extras.d


[ Heat OSprofiler enabled ]
$ echo -e "[profiler]\nprofiler_enabled = True\ntrace_sqlalchemy = True\n" >> /etc/heat/heat.conf

$ heat --profile SECRET_KEY stack-list
# it will print <Trace ID>

osprofiler trace show --html <Trace ID>


[ vi  localrc ]
If downloads over the git protocol fail, switch to http:
GIT_BASE=http://git.openstack.org

[ install devstack ]
./stack.sh


[ fixing tempest errors ]
$ wget https://pymox.googlecode.com/files/mox-0.5.3.tar.gz
$ tar xvf mox-0.5.3.tar.gz
$ cd mox-0.5.3
$ sudo python setup.py install

$ vi ~/.pydistutils.cfg
[easy_install]
index_url =  http://mirror.dfw.rax.openstack.org/pypi/simple
allow_hosts = *.openstack.org




[ public connectivity setup - Neutron ]
br-ex has the gateway 192.168.75.2 attached, which breaks public IP traffic, so delete it.
To connect VMs to the external network, run add-port; note that this cuts the host's own external connectivity.
sudo ip link set up br-int
sudo ip link set up br-tun
sudo ip addr del 192.168.75.2/24 dev br-ex
sudo ovs-vsctl add-port br-ex eth0
#ifconfig br-ex promisc up

[ host internet connectivity setup - Neutron ]
To restore internet access from the host, run del-port:
sudo ovs-vsctl del-port eth0

[ public connectivity setup - nova-network ]
Turn off DHCP on the eth1 host-only network; otherwise VMs get internal IPs from the 192.168.230.x range.
br100 must bridge only the eth1 internal network, so detach eth0 from br100.
Give the external IP that br100 took from eth0 back to eth0.
sudo brctl show br100
sudo brctl delif br100 eth0
sudo ip addr del 192.168.75.136/24 dev br100
sudo ip addr add 192.168.75.136/24 dev eth0

[ fix the default gateway - nova-network ]
sudo route del -net 0.0.0.0/0 gw 192.168.208.2 dev br100
sudo route add -net 0.0.0.0/0 gw 192.168.208.2 dev eth0

[ edit cinder.conf - for notification messages ]
notification_driver=messagingv2


[ CLI calls ]
. openrc admin demo

[ starting services ]
screen -c stack-screenrc


[ add default security group rules ]
openstack security group rule create --proto icmp --src-ip 0.0.0.0/0 --dst-port -1 --ingress 5f7fe4ab-7069-490e-b95d-946a0148e523

openstack security group rule create --proto tcp --src-ip 0.0.0.0/0 --dst-port 1:65535 --ingress 5f7fe4ab-7069-490e-b95d-946a0148e523

[ nova boot ]
nova boot --flavor m1.tiny --image 32dc6f3e-83fc-4b18-ba08-c06a28bdac38 --nic net-id=7fa105b5-fcc7-4ce9-abbe-c49b867bb0b3  --key-name magnum-key --security-groups 6b84dbca-3e20-4ce5-9774-9c3128a2eb5f test-01


sudo ip netns
qrouter-0e8971de-9119-4bed-9c70-288a0ed15581
qdhcp-7fa105b5-fcc7-4ce9-abbe-c49b867bb0b3


[ vi stopnova.sh ]
Write a shell script to stop the services:
#!/bin/bash

rm -rf /opt/stack/status/stack/*

cd /usr/local/bin
# Kill every running process for each OpenStack service binary
for prefix in nova- cinder- keystone- glance- heat ceilometer trove neutron ovs; do
    for i in $( ls ${prefix}* ); do
        sudo kill -9 `ps aux | grep -v grep | grep $i | awk '{print $2}'`
    done
done


[ resetting devstack ]
./unstack.sh
./stopnova.sh

# clean compiled sources
cd /opt/stack
find . -name "*.pyc" -delete
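`find ... -delete` is the robust form here: piping matches to `rm` fails when there are no matches and mangles paths containing spaces. A quick demonstration in a throwaway directory (the file names are hypothetical):

```shell
tmp=$(mktemp -d)
touch "$tmp/a.pyc" "$tmp/keep.py" "$tmp/with space.pyc"
# -delete removes matches in place; no xargs/rm plumbing needed
find "$tmp" -name "*.pyc" -delete
ls "$tmp"    # keep.py
rm -rf "$tmp"
```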

# clean up VMs
sudo rm -rf /etc/libvirt/qemu/inst*
sudo virsh list | grep inst | awk '{print $1}' | xargs -n1 virsh destroy

# delete br-tun
sudo ip link set dev br-tun down
sudo ovs-vsctl del-br br-tun

# delete the vxlan device
sudo ip link delete dev vxlan_sys_4789

./clean.sh --all   (if needed)
sudo apt-get purge mysql-server   (if needed)
sudo apt-get autoremove   (if needed)


[ removing installed packages ]
# cd /usr/local/lib/python2.7/dist-packages
find . -maxdepth 1 -name "*swift*" | xargs sudo rm -rf


[ fixing swift failing to start ]
Find and kill the processes listening on these ports:
netstat -apn | grep 6011
netstat -apn | grep 6012
netstat -apn | grep 6013       # swift object server
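`netstat` comes from the deprecated net-tools package; `ss` from iproute2 is its modern replacement and works the same way here (port 6013 as the example; run with sudo to also see process names):

```shell
# List listening TCP sockets and filter for the swift object-server port
ss -ltn | grep 6013 || echo "nothing listening on 6013"
```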


[ recreating the cinder volume backing file when missing ]
$ cd /opt/stack/data

$ sudo losetup -a
$ file /opt/stack/data/cinder-volumes-lvmdriver-1-backing-file

$ dd if=/dev/zero of=cinder-volumes-lvmdriver-1-backing-file bs=1 count=0 seek=10G   # desired size
$ sudo losetup /dev/loop2 cinder-volumes-lvmdriver-1-backing-file
$ sudo fdisk /dev/loop2
# Type the following into fdisk:
n
p
1
ENTER
ENTER
t
8e
w

$ sudo pvcreate /dev/loop2
$ vgcreate cinder-volumes-lvmdriver-1 /dev/loop2
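The `dd ... count=0 seek=10G` step above creates a sparse file: the apparent size is 10 GiB, but no blocks are allocated until volumes are actually written. A quick way to confirm this behavior (the temp path is hypothetical):

```shell
backing=$(mktemp)
# count=0 writes nothing; seek=10G just sets the file length, leaving a hole
dd if=/dev/zero of="$backing" bs=1 count=0 seek=10G 2>/dev/null
stat -c '%s' "$backing"    # apparent size: 10737418240 bytes
du -k "$backing"           # allocated size: (close to) 0
rm -f "$backing"
```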


[ when br-tun, br-ex, br-int do not appear in 'ip a' ]
Restart openvswitch:
$ sudo service openvswitch-switch restart


[ when a Glance image exists in the DB but the file was not uploaded ]
$ sudo losetup -a

/dev/loop0: [2049]:1733858 (/opt/stack/data/swift/drives/images/swift.img)

/dev/loop1: [2049]:1733859 (/opt/stack/data/cinder-volumes-default-backing-file)

/dev/loop2: [2049]:1733860 (/opt/stack/data/cinder-volumes-lvmdriver-1-backing-file)


# mount the swift storage

$ sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 /opt/stack/data/swift/drives/images/swift.img /opt/stack/data/swift/drives/sdb1


$ sudo losetup /dev/loop1 /opt/stack/data/cinder-volumes-default-backing-file
$ sudo losetup /dev/loop2 /opt/stack/data/cinder-volumes-lvmdriver-1-backing-file



# create the swift storage
$ mkfs.xfs -f -i size=1024 /opt/stack/data/swift/drives/images/swift.img

# upload an image if none exists
$ . openrc
$ ./tools/upload_image.sh http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz

[ automatically attaching the cinder-volume and swift backing files at boot ]
$ sudo vi /etc/init.d/init-devstack

#! /bin/sh
### BEGIN INIT INFO
# Provides:          init-devstack
# Required-Start:
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Execute bind storage.
# Description:
### END INIT INFO

PATH=/sbin:/usr/sbin:/bin:/usr/bin

case "$1" in
  start)
    losetup /dev/loop1 /opt/stack/data/cinder-volumes-backing-file
    mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 /opt/stack/data/swift/drives/images/swift.img /opt/stack/data/swift/drives/sdb1
    ;;
  restart|reload|force-reload)
    echo "Error: argument '$1' not supported" >&2
    exit 3
    ;;
  stop)
    echo "Error: argument '$1' not supported" >&2
    exit 3
    ;;
  *)
    echo "Usage: $0 start" >&2
    exit 3
    ;;
esac

$ sudo  update-rc.d init-devstack defaults


[ writing the nova-compute log to a file ]
cd /opt/stack/nova && nohup /usr/local/bin/nova-compute > /opt/stack/logs/screen/nova-compute.log 2>&1 &
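The trailing `> logfile 2>&1` matters, and the order is easy to get wrong: redirections apply left to right, so `2>&1` must come after stdout is already pointed at the file. A quick demonstration (the temp file path is arbitrary):

```shell
log=$(mktemp)
# Correct order: stdout to the file first, then stderr to "where stdout is now"
( echo out; echo err >&2 ) > "$log" 2>&1
wc -l < "$log"    # 2 -- both lines captured

# Wrong order: stderr is copied to the terminal before stdout is redirected
( echo out; echo err >&2 ) 2>&1 > "$log"
wc -l < "$log"    # 1 -- only stdout captured
rm -f "$log"
```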



[ Ubuntu Server 14.04 Image Upload ]
Name : Ubuntu Server 14.04 64-bit
Location : http://uec-images.ubuntu.com/releases/14.04.2/14.04.2/ubuntu-14.04-server-cloudimg-amd64-disk1.img
Format : QCOW2 - QEMU Emulator
Minimum disk : 5 GB
Minimum RAM : 1024 MB

./tools/upload_image.sh http://uec-images.ubuntu.com/releases/14.04.2/14.04.2/ubuntu-14.04-server-cloudimg-amd64-disk1.img

glance image-update --min-disk 5 --min-ram 1024 5f1949a1-60da-475d-83a3-a4f49be35d77

See the following sites for images:
http://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-amd64-disk1.img (Ubuntu Server 10.04 64-bit)
https://help.ubuntu.com/community/UEC/Images
http://uec-images.ubuntu.com/releases/


[ How to contribute source ]

# clone a project
git clone git://github.com/openstack/nova.git

# check the SSH port (open in a browser)
https://review.openstack.org/ssh_info

# Testing Gerrit Connections
ssh -p 29418 StephenAhn@review.openstack.org

# Setting username
git config --global --add gitreview.username "StephenAhn"

# save a shortcut host
vi ~/.ssh/config
Host review
  Hostname review.openstack.org
  Port 29418
  User StephenAhn

# verify the gerrit remote
git review -s

# if the gerrit remote check fails, add the remote manually
git remote add gerrit ssh://StephenAhn@review.openstack.org:29418/openstack/nova.git

# update to the latest source
git remote update
git checkout master
git pull --ff-only origin master

# create a blueprint branch
git checkout -b bp/local-storage-volume-scheduling

# add your email to .mailmap
vi .mailmap
    <skanddh@gmail.com> <seungkyu.ahn@samsung.com>

# write the commit message
git commit --amend

Keep the first line to a short summary of 50 characters or less.
[blank line]
Write the description below it, wrapping lines at 72 characters.
.....
Implements: blueprint local-storage-volume-scheduling

Do not write a Change-Id line; it is added automatically.

# submit the review (besides the sample template, any of the three commands below works)
# sample template
git push ssh://StephenAhn@review.openstack.org:29418/<Project Name> HEAD:refs/for/<Branch Name>

git push ssh://StephenAhn@review.openstack.org:29418/openstack/cinder HEAD:refs/for/bp/local-storage-volume-scheduling
git push review:openstack/cinder HEAD:refs/for/bp/local-storage-volume-scheduling
git review

# the following two commands are equivalent
ssh -p 29418 review.openstack.org gerrit ls-projects
ssh review gerrit ls-projects



[ running unit tests ]
sudo apt-get install python-dev libssl-dev python-pip git-core libmysqlclient-dev libpq-dev
sudo apt-get install libxml2-dev libxslt-dev libvirt-dev
sudo apt-get install python-virtualenv testrepository


[ nova unit test ]
cd /opt/stack/nova
./run_tests.sh

# pep8 coding-style check
./run_tests.sh -p

# if an error occurs with netaddr>=0.7.6
$ source .venv/bin/activate
$ wget https://github.com/downloads/drkjam/netaddr/netaddr-0.7.9.zip
$ unzip netaddr-0.7.9.zip
$ cd netaddr-0.7.9
$ python setup.py install

# error when installing libvirt-python 1.2.5 on Ubuntu 12.04
Ubuntu 12.04 ships libvirt 0.9.8 by default, so upgrade to libvirt 1.2.0
$ sudo apt-get update
sudo apt-get -y install \
    gcc \
    make \
    pkg-config \
    libxml2-dev \
    libgnutls-dev \
    libdevmapper-dev \
    libcurl4-gnutls-dev \
    python-dev \
    libpciaccess-dev \
    libxen-dev \
    libyajl-dev \
    libnl-dev

sudo mkdir -p /opt/libvirt
$ sudo chmod 00755 /opt/libvirt
$ sudo chown root:root /opt/libvirt
$ sudo chmod a+w /opt/libvirt
$ cd /opt/libvirt
$ wget http://libvirt.org/sources/libvirt-1.2.0.tar.gz
$ tar xzvf libvirt-1.2.0.tar.gz
$ mv libvirt-1.2.0 libvirt
$ cd libvirt
./configure \
    --prefix=/usr \
    --localstatedir=/var \
    --sysconfdir=/etc \
    --with-esx=yes \
    --with-xen=yes
$ make -j
$ sudo make install

$ ./run_tests.sh

[ testing individual modules ]
To run the tests in the cinder/tests/scheduler directory:
./run_tests.sh scheduler

To run the tests in the cinder/tests/test_libvirt.py file:
$ ./run_tests.sh test_libvirt

To run the tests in the HostStateTestCase class in cinder/tests/test_libvirt.py:
$ ./run_tests.sh test_libvirt.HostStateTestCase

To run the ToPrimitiveTestCase.test_dict test method in cinder/tests/test_utils.py:
$ ./run_tests.sh test_utils.ToPrimitiveTestCase.test_dict


[ tempest tests ]

# old way
$ cd /opt/stack/tempest
$ nosetests tempest/scenario/test_network_basic_ops.py


# new way
$ cd /opt/stack/tempest
$ ostestr    or    testr


$ git clone https://github.com/openstack/tempest/
$ pip install tempest/
$ cd tempest
$ tempest init cloud-01
$ cd cloud-01
$ cp -r /opt/stack/tempest/etc/ .
$ ../run_tempest.sh -C etc/tempest.conf

# tempest.conf
[DEFAULT]
debug = True
log_file = tempest.log
use_stderr = False
use_syslog = False

[oslo_concurrency]
lock_path = /opt/stack/data/tempest

[compute]
fixed_network_name = private
ssh_connect_method = floating
flavor_ref_alt = 84
flavor_ref = 42
image_alt_ssh_user = cirros
image_ref_alt = 8bbeeb3d-fea4-43ee-8c27-5b1015693590
image_ref = 8bbeeb3d-fea4-43ee-8c27-5b1015693590
ssh_user = cirros
build_timeout = 196

[volume]
build_timeout = 196

[identity]
auth_version = v2
uri_v3 = http://192.168.230.161:5000/v3
uri = http://192.168.230.161:5000/v2.0/

[auth]
use_dynamic_credentials = True
tempest_roles = Member
admin_domain_name = Default
admin_tenant_id = b96b0deb693842b2a09a0d91832e41ea
admin_tenant_name = admin
admin_password = imsi00
admin_username = admin

[image-feature-enabled]
deactivate_image = True

[validation]
network_for_ssh = private
image_ssh_user = cirros
ssh_timeout = 196
ip_version_for_ssh = 4
run_validation = False
connect_method = floating

[compute-feature-enabled]
allow_duplicate_networks = True
attach_encrypted_volume = True
live_migrate_paused_instances = True
preserve_ports = True
api_extensions = all
block_migration_for_live_migration = False
change_password = False
live_migration = False
resize = True
max_microversion = latest

[network]
default_network = 10.0.0.0/24
public_router_id =
public_network_id = 9353aab8-5f65-4daa-8c30-d90b588ec36d
tenant_networks_reachable = false
api_version = 2.0

[network-feature-enabled]
api_extensions = all
ipv6_subnet_attributes = True
ipv6 = True

[orchestration]
stack_owner_role = _member_
build_timeout = 900
instance_type = m1.heat


[scenario]
large_ops_number = 0
img_file = cirros-0.3.4-x86_64-disk.img
aki_img_file = cirros-0.3.4-x86_64-vmlinuz
ari_img_file = cirros-0.3.4-x86_64-initrd
ami_img_file = cirros-0.3.4-x86_64-blank.img
img_dir = /home/stack/Documents/github/devstack/files/images/cirros-0.3.4-x86_64-uec

[telemetry-feature-enabled]
events = True

[object-storage-feature-enabled]
discoverable_apis = all

[volume-feature-enabled]
api_extensions = all
volume_services = True
incremental_backup_force = True

[dashboard]
dashboard_url = http://192.168.230.161/

[cli]
cli_dir = /usr/local/bin

[service_available]
trove = True
ironic = False
sahara = False
horizon = True
ceilometer = True
heat = True
swift = True
cinder = True
neutron = True
nova = True
glance = True
key = True





[ installing the latest version ]
# download the latest devstack source
$ cd Git/devstack
$ git pull --ff-only origin master

# download the latest OpenStack source
$ vi git_update.sh

#! /bin/bash

# Update every OpenStack checkout under /opt/stack to the latest master.
# (data/ and logs/ under /opt/stack are not git repositories, so they are
# not included here.)
for repo in ceilometer cinder cliff glance heat horizon keystone neutron \
            nova noVNC oslo.config oslo.messaging oslo.rootwrap oslo.vmware \
            pbr pycadf python-ceilometerclient python-cinderclient \
            python-glanceclient python-heatclient python-keystoneclient \
            python-neutronclient python-novaclient python-openstackclient \
            python-swiftclient requirements stevedore swift taskflow tempest
do
    cd /opt/stack/$repo/
    git checkout master
    git pull origin master
done


# reinstall devstack
$ ./stack.sh


[ removing old packages ]
$ vi clean_package.sh

#! /bin/bash

cd /usr/local/lib/python2.7/dist-packages
# Remove previously installed OpenStack packages from dist-packages
for pkg in nova ceilometer cinder glance keystone horizon neutron oslo heat \
           pbr pycadf openstackclient swift stevedore taskflow tempest
do
    find . -maxdepth 1 -name "*${pkg}*" | xargs sudo rm -rf
done



[ when a VM does not recognize an attached volume ]
Run the following, then check whether /dev/vdb appears:
# echo 1 > /sys/bus/pci/rescan 

[ creating the root disk as a boot-from-volume (EBS-style) while also attaching a data volume ]
# nova boot [name] --flavor [flavorid] --block-device id=[imageid],source=image,dest=volume,size=10,bootindex=0,shutdown=remove --block-device id=[volumeid],source=volume,dest=volume,size=100,bootindex=1

[ downloading pip packages into a cache without installing them ]
# sudo pip install python-openstackclient --download=/var/cache/pip
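Later pip releases dropped `--download` in favor of a dedicated subcommand; if the flag above is rejected, the equivalent (under that assumption) is:

```shell
# Fetch a package and its dependencies into a cache directory without installing
pip download python-openstackclient -d /var/cache/pip
```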


[ configuring cpu and memory overcommit ]
$ vi /etc/nova/nova.conf

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 100.0
disk_allocation_ratio = 100.0
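With these settings the scheduler multiplies physical capacity by the ratio: `cpu_allocation_ratio = 16.0` makes each physical core count as 16 schedulable vCPUs. A quick sanity check of the arithmetic (the 8-core host is a hypothetical example):

```shell
cores=8
ratio=16
# advertised vCPU capacity = physical cores * cpu_allocation_ratio
echo $((cores * ratio))    # 128
```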



[ checking whether a VM's DNS lookups go out ]
$ sudo tcpdump -i tap2404d0f1-25 -n -v udp port 53



[ changing the DNS servers assigned to VMs ]
$ neutron subnet-list
$ neutron subnet-update <subnet> --dns_nameservers list=true 8.8.8.8 8.8.4.4




[ fixing "no such table: django_session" when accessing Horizon ]
Create the DB tables with the command below (newer Django versions replace syncdb with 'python manage.py migrate'):
$ cd /opt/stack/horizon
$ python manage.py syncdb




[ Murano WordPress Package Import ]
$ export MURANO_REPO_URL=http://storage.apps.openstack.org
$ murano package-import io.murano.apps.WordPress

Access WordPress at:
http://192.168.75.209/wordpress




[ OpenStack Source virtualenv pip install ]
$ mkdir -p ~/.pip
$ vi ~/.pip/pip.conf
[global]
#index-url=https://pypi.python.org/pypi/
#index-url=http://pypi.gocept.com/simple/
index-url=https://pypi.python.org/simple/

$ cd ~/Documents/github/Virtualenvs
$ virtualenv mitaka
$ cd mitaka
$ . bin/activate
$ pip install -r requirements.txt --trusted-host pypi.python.org
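On Python 3 the stdlib `venv` module covers the same use case as virtualenv; a minimal sketch (the path is hypothetical, and `--without-pip` sidesteps systems where ensurepip is not installed):

```shell
dir=$(mktemp -d)
# Create a bare virtual environment and confirm its interpreter runs inside it
python3 -m venv --without-pip "$dir/mitaka-venv"
"$dir/mitaka-venv/bin/python" -c 'import sys; print(sys.prefix)'
rm -rf "$dir"
```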









[ Editing the vmx file to enable VT-x for Ubuntu running in VMware on a Mac ]

vhv.enable = "TRUE"


[ Install the ssh server ]

sudo apt-get install -y openssh-server


[ Architecture ]

Cloud Controller

    - hostname : controller

    - eth0 : 192.168.75.131

    - eth1 : 192.168.230.131

    - installed modules : mysql, rabbitMQ, keystone, glance, nova-api,

                       cinder-api, cinder-scheduler, cinder-volume, open-iscsi, iscsitarget,

                       quantum-server

Network

    - hostname : network

    - eth0 : 192.168.75.132

    - eth1 : 192.168.230.132

    - eth2 : 

    - eth3 : 192.168.75.133

    - installed modules : openvswitch-switch, openvswitch-datapath-dkms,

                       quantum-plugin-openvswitch-agent, dnsmasq, quantum-dhcp-agent, quantum-l3-agent

Compute

    - hostname : compute

    - eth0 : 192.168.75.134

    - eth1 : 192.168.230.134

    - eth2 : 

    - installed modules : openvswitch-switch, openvswitch-datapath-dkms,

                       quantum-plugin-openvswitch-agent, nova-compute-kvm, open-iscsi, iscsitarget


[ Network layout ]

eth0 : host public network (NAT)               192.168.75.0/24

eth1 : host private network (VMnet2)           192.168.230.0/24

eth2 : VM private network                      10.0.0.0/24

eth3 : VM Quantum public network (NAT)         192.168.75.0/26


[ Change the hostname ]

vi /etc/hosts

192.168.230.131 controller

192.168.230.132 network

192.168.230.134 compute


vi /etc/hostname

   controller


hostname -F /etc/hostname

Verify from a new terminal.


[ Configure eth0 and eth1 ]

vi /etc/network/interfaces


# The loopback network interface

auto lo

iface lo inet loopback


# Host public network

auto eth0

iface eth0 inet static

      address 192.168.75.131

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# Host private network

auto eth1

iface eth1 inet static

      address 192.168.230.131

      netmask 255.255.255.0


service networking restart
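When sanity-checking the stanza above it helps to see the netmask as a CIDR prefix length; a pure-awk helper (an illustration only, not part of any OpenStack tool):

```shell
# Convert a dotted-quad netmask to a CIDR prefix length.
mask2prefix() {
    echo "$1" | awk -F. '{
        n = 0
        for (i = 1; i <= 4; i++) { x = $i; while (x > 0) { n += x % 2; x = int(x / 2) } }
        print n
    }'
}
mask2prefix 255.255.255.0    # → 24
```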


[ Check that Ubuntu under VMware supports hardware virtualization ]

egrep '(vmx|svm)' --color=always /proc/cpuinfo


[ nova installation manual ]

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst


[ nova source locations ]

nova link source = /usr/lib/python2.7/dist-packages/nova

nova original source = /usr/share/pyshared/nova


##################   Common setup on all nodes   #####################


[ Set the root password ]

sudo su -

passwd


[ repository upgrade ]

apt-get install -y ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring


echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list


apt-get update

apt-get upgrade

apt-get dist-upgrade


[ Install screen and vim ]

sudo apt-get install -y screen vim


[ .screenrc ]

vbell off

autodetach on

startup_message off

defscrollback 1000

attrcolor b ".I"

termcap xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'

defbce "on"

#term screen-256color


## apps I want to auto-launch

#screen -t irssi irssi

#screen -t mutt mutt


## statusline, customized. (should be one-line)

hardstatus alwayslastline '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}[%{W}%n%f %t%?(%u)%?%{=b kR}]%{= kw}%?%+Lw%?%?%= %{g}][%{Y}%l%{g}]%{=b C}[ %D %m/%d %C%a ]%{W}'


[ .vimrc ]

syntax on

set nocompatible

set number

set backspace=indent,eol,start

set tabstop=4

set shiftwidth=4

set autoindent

set visualbell

set laststatus=2

set statusline=%h%F%m%r%=[%l:%c(%p%%)]

set hlsearch

set background=dark

set expandtab

set tags=./tags,./TAGS,tags,TAGS,/usr/share/pyshared/nova/tags

set et

" Removes trailing spaces
function! TrimWhiteSpace()
    %s/\s\+$//e
endfunction

nnoremap <silent> <Leader>rts :call TrimWhiteSpace()<CR>
autocmd FileWritePre    * :call TrimWhiteSpace()
autocmd FileAppendPre   * :call TrimWhiteSpace()
autocmd FilterWritePre  * :call TrimWhiteSpace()
autocmd BufWritePre     * :call TrimWhiteSpace()
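The TrimWhiteSpace() autocmds above strip trailing whitespace on every write; the same cleanup can be done in batch from the shell (GNU sed assumed):

```shell
# Strip trailing whitespace from a file, mirroring the Vim function above.
f=$(mktemp)
printf 'keep this   \nand this\t\n' > "$f"
sed -i 's/[[:space:]]*$//' "$f"
grep -q '[[:space:]]$' "$f" || echo "no trailing whitespace"
rm -f "$f"
```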


[ remove dmidecode ]

apt-get purge dmidecode

apt-get autoremove

kill -9 [dmidecode process]


[ Create the nova user and grant sudo privileges (when only root exists) ]

adduser nova


visudo

   nova     ALL=(ALL:ALL) NOPASSWD:ALL


[ Install ntp ]

apt-get install -y ntp


vi /etc/ntp.conf

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server time.bora.net


service ntp restart


# set the Korean timezone and do an initial time sync

ntpdate -u time.bora.net

ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime


[ Install the mysql client ]

apt-get install -y python-mysqldb mysql-client-5.5


[ Install and verify KVM ]

apt-get install -y cpu-checker

apt-get install -y kvm libvirt-bin pm-utils

kvm-ok


# check that the kvm modules are loaded

lsmod | grep kvm


# load the kvm modules automatically at boot

vi /etc/modules

   kvm

   kvm_intel


vi /etc/libvirt/qemu.conf

   cgroup_device_acl = [

   "/dev/null", "/dev/full", "/dev/zero",

   "/dev/random", "/dev/urandom",

   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",

   "/dev/rtc", "/dev/hpet","/dev/net/tun"

   ]


# delete default virtual bridge

virsh net-destroy default

virsh net-undefine default


# enable live migration

vi /etc/libvirt/libvirtd.conf

   listen_tls = 0

   listen_tcp = 1

   auth_tcp = "none"


vi /etc/init/libvirt-bin.conf

   env libvirtd_opts="-d -l"


vi /etc/default/libvirt-bin

   libvirtd_opts="-d -l"


service dbus restart

service libvirt-bin restart


[ Install bridge utilities ]

apt-get install -y vlan bridge-utils


[ Enable IP forwarding ]

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

sysctl net.ipv4.ip_forward=1


##################   Installing the Cloud Controller   #####################


[ ntp setup ]

vi /etc/ntp.conf

   server time.bora.net

service ntp restart


[ Network setup ]

vi /etc/network/interfaces


# The loopback network interface

auto lo

iface lo inet loopback


# Host public network

auto eth0

iface eth0 inet static

      address 192.168.75.131

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# Host private network

auto eth1

iface eth1 inet static

      address 192.168.230.131

      netmask 255.255.255.0


service networking restart


[ Change the hostname ]

vi /etc/hosts

192.168.230.131 controller

192.168.230.132 network

192.168.230.134 compute


vi /etc/hostname

   controller


hostname -F /etc/hostname


[ Install the mysql server ]

apt-get install -y python-mysqldb mysql-server                 password : <password>

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

service mysql restart


[ rabbitmq server install ]

apt-get install -y rabbitmq-server


# switch user

sudo su - nova


[ Database setup ]

mysql -u root -p

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '<password>';


CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '<password>';


CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '<password>';


CREATE DATABASE quantum;

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'controller' IDENTIFIED BY '<password>';


CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY '<password>';


# if GRANT does not work

use mysql;


UPDATE user SET

Select_priv = 'Y',

Insert_priv = 'Y',

Update_priv = 'Y',

Delete_priv = 'Y',

Create_priv = 'Y',

Drop_priv = 'Y',

Reload_priv = 'Y',

Shutdown_priv = 'Y',

Process_priv = 'Y',

File_priv = 'Y',

Grant_priv = 'Y',

References_priv = 'Y',

Index_priv = 'Y',

Alter_priv = 'Y',

Show_db_priv = 'Y',

Super_priv = 'Y',

Create_tmp_table_priv = 'Y',

Lock_tables_priv = 'Y',

Execute_priv = 'Y',

Repl_slave_priv = 'Y',

Repl_client_priv = 'Y',

Create_view_priv = 'Y',

Show_view_priv = 'Y',

Create_routine_priv = 'Y',

Alter_routine_priv = 'Y',

Create_user_priv = 'Y',

Event_priv = 'Y',

Trigger_priv = 'Y',

Create_tablespace_priv = 'Y'

WHERE user IN ('keystone', 'glance', 'nova', 'quantum', 'cinder');

FLUSH PRIVILEGES;


[ Install keystone ]

sudo apt-get install -y keystone

sudo service keystone status

sudo rm /var/lib/keystone/keystone.db


sudo vi /etc/keystone/keystone.conf

connection = mysql://keystone:<password>@controller/keystone

token_format = UUID


sudo service keystone restart

sudo keystone-manage db_sync


[ keystone setup ]

vi keystone_basic.sh

#!/bin/sh

#

# Keystone basic configuration 


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#

HOST_IP=192.168.230.131

ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin_pass}

SERVICE_PASSWORD=${SERVICE_PASSWORD:-service_pass}

export SERVICE_TOKEN="ADMIN"

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"

SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}


get_id () {

    echo `$@ | awk '/ id / { print $4 }'`

}


# Tenants

ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)

SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)



# Users

ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)



# Roles

ADMIN_ROLE=$(get_id keystone role-create --name=admin)

KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)

KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)


# Add Roles to Users in Tenants

keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT

keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT


# The Member role is used by Horizon and Swift

MEMBER_ROLE=$(get_id keystone role-create --name=Member)


# Configure service users/roles

NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE


GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE


QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE


CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)

keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
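The get_id helper at the top of this script extracts the id column from keystone's ASCII-table output. It can be exercised against a canned row; fake_keystone below is a stand-in for the real command, not a keystone CLI call:

```shell
# get_id, as in the script above: pull field 4 from the "| id | <uuid> |" row.
get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}

# Stand-in for "keystone tenant-create ..." table output.
fake_keystone() {
    printf '| %s | %s |\n' 'id' 'abc123'
}

get_id fake_keystone    # → abc123
```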


vi keystone_endpoints_basic.sh

#!/bin/sh

#

# Keystone basic Endpoints


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: openstack@lists.launchpad.net

# License: Apache Software License (ASL) 2.0

#


# Host address

HOST_IP=192.168.230.131

EXT_HOST_IP=192.168.75.131

VOLUME_HOST_IP=192.168.230.131

VOLUME_EXT_HOST_IP=192.168.75.131

NETWORK_HOST_IP=192.168.230.132

NETWORK_EXT_HOST_IP=192.168.75.133


# MySQL definitions

MYSQL_USER=keystone

MYSQL_DATABASE=keystone

MYSQL_HOST=$HOST_IP

MYSQL_PASSWORD=<password>


# Keystone definitions

KEYSTONE_REGION=RegionOne

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"


while getopts "u:D:p:m:K:R:E:T:vh" opt; do

  case $opt in

    u)

      MYSQL_USER=$OPTARG

      ;;

    D)

      MYSQL_DATABASE=$OPTARG

      ;;

    p)

      MYSQL_PASSWORD=$OPTARG

      ;;

    m)

      MYSQL_HOST=$OPTARG

      ;;

    K)

      MASTER=$OPTARG

      ;;

    R)

      KEYSTONE_REGION=$OPTARG

      ;;

    E)

      export SERVICE_ENDPOINT=$OPTARG

      ;;

    T)

      export SERVICE_TOKEN=$OPTARG

      ;;

    v)

      set -x

      ;;

    h)

      cat <<EOF

Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]

       [-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ] 

       [ -T keystone_token ]

          

Add -v for verbose mode, -h to display this message.

EOF

      exit 0

      ;;

    \?)

      echo "Unknown option -$OPTARG" >&2

      exit 1

      ;;

    :)

      echo "Option -$OPTARG requires an argument" >&2

      exit 1

      ;;

  esac

done  


if [ -z "$KEYSTONE_REGION" ]; then

  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_TOKEN" ]; then

  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2

  missing_args="true"

fi


if [ -z "$SERVICE_ENDPOINT" ]; then

  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2

  missing_args="true"

fi


if [ -z "$MYSQL_PASSWORD" ]; then

  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2

  missing_args="true"

fi


if [ -n "$missing_args" ]; then

  exit 1

fi

 

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'

keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'

keystone service-create --name glance --type image --description 'OpenStack Image Service'

keystone service-create --name keystone --type identity --description 'OpenStack Identity'

keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'

keystone service-create --name quantum --type network --description 'OpenStack Networking service'


create_endpoint () {

  case $1 in

    compute)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'

    ;;

    volume)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$VOLUME_EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$VOLUME_HOST_IP"':8776/v1/$(tenant_id)s'

    ;;

    image)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/v2' --adminurl 'http://'"$HOST_IP"':9292/v2' --internalurl 'http://'"$HOST_IP"':9292/v2'

    ;;

    identity)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'

    ;;

    ec2)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'

    ;;

    network)

    keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$NETWORK_EXT_HOST_IP"':9696/' --adminurl 'http://'"$NETWORK_HOST_IP"':9696/' --internalurl 'http://'"$NETWORK_HOST_IP"':9696/'

    ;;

  esac

}


for i in compute volume image identity ec2 network; do

  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1

  create_endpoint $i $id

done


# admin credentials for keystone access

vi creds

unset http_proxy

unset https_proxy

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin_pass

export OS_AUTH_URL="http://controller:5000/v2.0/"


source creds

keystone user-list


[ Install Glance ]

sudo apt-get install -y glance

sudo rm /var/lib/glance/glance.sqlite

sudo service glance-api status

sudo service glance-registry status


sudo vi /etc/glance/glance-api-paste.ini

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

delay_auth_decision = true

auth_host = 192.168.230.141

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = service_pass


sudo vi /etc/glance/glance-registry-paste.ini

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

auth_host = 192.168.230.141

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = service_pass


sudo vi /etc/glance/glance-api.conf

sql_connection = mysql://glance:<password>@192.168.230.141/glance

enable_v1_api = True

enable_v2_api = True


[paste_deploy]

flavor=keystone


sudo vi /etc/glance/glance-registry.conf

sql_connection = mysql://glance:<password>@192.168.230.141/glance


[paste_deploy]

flavor=keystone


sudo glance-manage db_sync

sudo service glance-registry restart

sudo service glance-api restart


[ Register an image ]

mkdir images

cd images

wget --no-check-certificate https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

glance image-create --name cirros --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img

glance image-list


[ Install nova-api and scheduler ]

sudo apt-get install -y nova-api nova-scheduler nova-cert novnc nova-consoleauth nova-novncproxy nova-doc nova-conductor


mysql -uroot -p<password> -e 'CREATE DATABASE nova;'

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '<password>';"

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<password>';"


sudo vi /etc/nova/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = nova

   admin_password = service_pass

   signing_dir = /tmp/keystone-signing-nova

   # Workaround for https://bugs.launchpad.net/nova/+bug/1154809

   auth_version = v2.0



sudo vi /etc/nova/nova.conf


[DEFAULT]

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/run/lock/nova

verbose=True

api_paste_config=/etc/nova/api-paste.ini

compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

rabbit_host=192.168.230.141

nova_url=http://192.168.230.141:8774/v1.1/

sql_connection=mysql://nova:imsi00@192.168.230.141/nova

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf


# Auth

use_deprecated_auth=false

auth_strategy=keystone


# Imaging service

glance_api_servers=192.168.230.141:9292

image_service=nova.image.glance.GlanceImageService


# Vnc configuration

novnc_enabled=true

novncproxy_base_url=http://192.168.75.141:6080/vnc_auto.html

novncproxy_port=6080

vncserver_proxyclient_address=192.168.230.141

vncserver_listen=0.0.0.0


# Network settings

network_api_class=nova.network.quantumv2.api.API

quantum_url=http://192.168.230.143:9696

quantum_auth_strategy=keystone

quantum_admin_tenant_name=service

quantum_admin_username=quantum

quantum_admin_password=service_pass

quantum_admin_auth_url=http://192.168.230.141:35357/v2.0

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


#Metadata

service_quantum_metadata_proxy = True

quantum_metadata_proxy_shared_secret = helloOpenStack

metadata_host = 192.168.230.141

metadata_listen = 127.0.0.1

metadata_listen_port = 8775


# Compute #

compute_driver=libvirt.LibvirtDriver


# Cinder #

volume_api_class=nova.volume.cinder.API

osapi_volume_listen_port=5900


sudo nova-manage db sync


# restart nova services

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
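The restart one-liner above parses ls output; a glob-based variant is slightly safer. Demonstrated here against a scratch directory instead of the real /etc/init.d:

```shell
# Safer service-restart loop: iterate a glob rather than parsing `ls`.
dir=$(mktemp -d)
touch "$dir/nova-api" "$dir/nova-scheduler"

for svc in "$dir"/nova-*; do
    # In real use: sudo service "$(basename "$svc")" restart
    echo "would restart: $(basename "$svc")"
done
rm -rf "$dir"
```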


# check nova services

nova-manage service list


[ Install Horizon ]

sudo apt-get install -y openstack-dashboard memcached


# remove the ubuntu theme

sudo apt-get purge openstack-dashboard-ubuntu-theme


# reload apache and memcached

sudo service apache2 restart

sudo service memcached restart


# dashboard URL (browser)

http://192.168.75.141/horizon/


##################   Installing Cinder   #####################


[ ntp setup ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Network setup ]


[ Change the hostname ]


[ Install Cinder ]

sudo apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

sudo sed -i 's/false/true/g' /etc/default/iscsitarget

sudo vi /etc/iscsi/iscsid.conf

   node.startup = automatic

sudo service iscsitarget start

sudo service open-iscsi start


mysql -uroot -p<password> -e 'CREATE DATABASE cinder;'

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '<password>';"

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '<password>';"


sudo vi /etc/cinder/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   service_protocol = http

   service_host = 192.168.75.141

   service_port = 5000

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = cinder

   admin_password = service_pass


sudo vi /etc/cinder/cinder.conf

   [DEFAULT]

   rootwrap_config=/etc/cinder/rootwrap.conf

   sql_connection = mysql://cinder:<password>@192.168.230.141/cinder

   api_paste_config = /etc/cinder/api-paste.ini

   iscsi_helper=ietadm

   volume_name_template = volume-%s

   volume_group = cinder-volumes

   verbose = True

   auth_strategy = keystone

   rabbit_host = 192.168.230.141


sudo cinder-manage db sync


[ Create the cinder-volumes backing store ]

dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=10G

sudo losetup /dev/loop2 cinder-volumes

sudo fdisk /dev/loop2


1. sudo fdisk -l

2. sudo fdisk /dev/sdb

3. Press 'n' to create a new disk partition.

4. Press 'p' to create a primary disk partition.

5. Press '1' to denote it as the 1st disk partition.

6. Either press ENTER twice to accept the default first and last cylinders (using the remainder of the disk as a single partition),

   or press ENTER once to accept the default first cylinder and then give the partition size explicitly as +size{K,M,G},

   e.g. +5G or +6700M.

7. Press 't', then select the new partition you made.

8. Enter '8e' to set the partition type to 8e, i.e. Linux LVM.

9. Press 'p' to display the partition setup. Note that the first partition is denoted as /dev/sda1 in Linux.

10. Press 'w' to write the partition table and exit fdisk.


sudo pvcreate /dev/loop2

sudo vgcreate cinder-volumes /dev/loop2
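The dd invocation above (count=0 seek=10G) creates a sparse backing file: 10 GB of apparent size with no blocks actually written. A small-scale demonstration of the same effect:

```shell
# Sparse file: apparent size comes from seek, not from written data.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=0 seek=1M 2>/dev/null
echo "apparent size: $(wc -c < "$f") bytes"    # 1048576
echo "disk usage   : $(du -k "$f" | cut -f1) KB"
rm -f "$f"
```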


# re-create the loop device automatically at boot

sudo vi /etc/init.d/cinder-setup-backing-file

#!/bin/sh

losetup /dev/loop2 /home/nova/cinder-volumes

exit 0


sudo chmod 755 /etc/init.d/cinder-setup-backing-file

sudo ln -s /etc/init.d/cinder-setup-backing-file /etc/rc2.d/S10cinder-setup-backing-file


# restart cinder services

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done


# verify cinder services

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done



##################   Installing the Quantum Server   #####################


[ ntp setup ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Network setup ]


[ Change the hostname ]


[ Install the quantum server ]

sudo apt-get install -y quantum-server

sudo rm -rf /var/lib/quantum/ovs.sqlite


mysql -uroot -p<password> -e 'CREATE DATABASE quantum;'

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY '<password>';"

mysql -uroot -p<password> -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY '<password>';"


sudo vi /etc/quantum/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


sudo service quantum-server restart

sudo service quantum-server status


##################   Installing the Quantum Network node   #####################


[ ntp setup ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Add eth2 as the VM public network - used as the Quantum public network ]

sudo vi /etc/network/interfaces


auto lo

iface lo inet loopback


# host public network

auto eth0

iface eth0 inet static

      address 192.168.75.144

      netmask 255.255.255.0

      gateway 192.168.75.2

      dns-nameservers 8.8.8.8 8.8.4.4


# VM private network, host private network

auto eth1

iface eth1 inet static

      address 192.168.230.144

      netmask 255.255.255.0


# VM public network

auto eth2

iface eth2 inet manual

      up ifconfig $IFACE 0.0.0.0 up

      up ip link set $IFACE promisc on

      down ip link set $IFACE promisc off

      down ifconfig $IFACE down


sudo service networking restart


[ Change the hostname ]


[ Install Open vSwitch ]

sudo apt-get install -y openvswitch-switch openvswitch-datapath-dkms


# create bridges

sudo ovs-vsctl add-br br-int

sudo ovs-vsctl add-br br-ex


[ Install the Quantum openVSwitch agent, dnsmasq, DHCP agent, L3 agent, and metadata agent ]

sudo apt-get install -y quantum-plugin-openvswitch-agent dnsmasq quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent


sudo vi /etc/quantum/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

   [DATABASE]

   sql_connection = mysql://quantum:<password>@192.168.230.141/quantum


   [OVS]

   tenant_network_type = gre

   enable_tunneling = True

   tunnel_id_ranges = 1:1000

   integration_bridge = br-int

   tunnel_bridge = br-tun

   local_ip = 192.168.230.144


sudo vi /etc/quantum/l3_agent.ini

   # append at the bottom of the file

   auth_url = http://192.168.230.141:35357/v2.0

   auth_region = RegionOne

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


sudo vi /etc/quantum/metadata_agent.ini

   auth_url = http://192.168.230.141:35357/v2.0

   auth_region = RegionOne

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass


   nova_metadata_ip = 192.168.230.141

   nova_metadata_port = 8775

   metadata_proxy_shared_secret = helloOpenStack


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


# restart Quantum services

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done


# connect br-ex to the public network

sudo ovs-vsctl add-port br-ex eth2



##################   Installing the Compute node   #####################


[ ntp setup ]

sudo vi /etc/ntp.conf

   server 192.168.230.141

sudo service ntp restart


[ Network setup ]


[ Change the hostname ]


[ Install Open vSwitch ]

sudo apt-get install -y openvswitch-switch openvswitch-datapath-dkms


# create the bridge

sudo ovs-vsctl add-br br-int


[ Install the Quantum openVSwitch agent ]

sudo apt-get install -y quantum-plugin-openvswitch-agent


sudo vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

   [DATABASE]

   sql_connection = mysql://quantum:imsi00@192.168.230.141/quantum


   [OVS]

   tenant_network_type = gre

   enable_tunneling = True

   tunnel_id_ranges = 1:1000

   integration_bridge = br-int

   tunnel_bridge = br-tun

   local_ip = 192.168.230.145


sudo vi /etc/quantum/quantum.conf

   rabbit_host = 192.168.230.141


   [keystone_authtoken]  ----> is this setting actually required?

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = quantum

   admin_password = service_pass

   signing_dir = /var/lib/quantum/keystone-signing


# quantum openVSwitch agent restart

sudo service quantum-plugin-openvswitch-agent restart


[ Install Nova (compute) ]

sudo apt-get install -y nova-compute-kvm open-iscsi


sudo vi /etc/nova/api-paste.ini

   [filter:authtoken]

   paste.filter_factory = keystone.middleware.auth_token:filter_factory

   auth_host = 192.168.230.141

   auth_port = 35357

   auth_protocol = http

   admin_tenant_name = service

   admin_user = nova

   admin_password = service_pass

   signing_dir = /tmp/keystone-signing-nova

   # Workaround for https://bugs.launchpad.net/nova/+bug/1154809

   auth_version = v2.0


sudo vi /etc/nova/nova-compute.conf

   [DEFAULT]

   libvirt_type=kvm

   libvirt_ovs_bridge=br-int

   libvirt_vif_type=ethernet

   libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

   libvirt_use_virtio_for_bridges=True


sudo vi /etc/nova/nova.conf

   [DEFAULT]

   logdir=/var/log/nova

   state_path=/var/lib/nova

   lock_path=/run/lock/nova

   verbose=True

   api_paste_config=/etc/nova/api-paste.ini

   compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

   rabbit_host=192.168.230.141

   nova_url=http://192.168.230.141:8774/v1.1/

   sql_connection=mysql://nova:imsi00@192.168.230.141/nova

   root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf


   # Auth

   use_deprecated_auth=false

   auth_strategy=keystone


   # Imaging service

   glance_api_servers=192.168.230.141:9292

   image_service=nova.image.glance.GlanceImageService


   # Vnc configuration

   novnc_enabled=true

   novncproxy_base_url=http://192.168.75.141:6080/vnc_auto.html

   novncproxy_port=6080

   vncserver_proxyclient_address=192.168.230.141

   vncserver_listen=0.0.0.0


   # Network settings

   network_api_class=nova.network.quantumv2.api.API

   quantum_url=http://192.168.230.141:9696

   quantum_auth_strategy=keystone

   quantum_admin_tenant_name=service

   quantum_admin_username=quantum

   quantum_admin_password=service_pass

   quantum_admin_auth_url=http://192.168.230.141:35357/v2.0

   libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

   linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

   firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


   #Metadata

   service_quantum_metadata_proxy = True

   quantum_metadata_proxy_shared_secret = helloOpenStack

   metadata_host = 192.168.230.141

   metadata_listen = 127.0.0.1

   metadata_listen_port = 8775


   # Compute #

   compute_driver=libvirt.LibvirtDriver


   # Cinder #

   volume_api_class=nova.volume.cinder.API

   osapi_volume_listen_port=5900


# restart nova service

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done


# nova service status

nova-manage service list



[ Running Nova commands ]

# run with admin credentials

source creds


# create a tenant and user

keystone tenant-create --name myproject

keystone role-list

keystone user-create --name=myuser --pass=<password> --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --email=myuser@domain.com

keystone user-role-add --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --user-id 29736a14d7d4471fa50ca04da38d89b1 --role-id 022cd675521b45ffb94693e7cab07db7


# create a network

quantum net-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 net_myproject

quantum net-list


# create an internal private subnet on the network

quantum subnet-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 --name net_myproject_internal net_myproject 10.0.0.0/24
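As a quick sanity check, the 10.0.0.0/24 CIDR passed to subnet-create above covers 256 addresses; Python's standard ipaddress module (unrelated to Quantum, shown here only for illustration) can print the usable host range:

```python
import ipaddress

# The CIDR given to quantum subnet-create above.
net = ipaddress.ip_network("10.0.0.0/24")

print(net.num_addresses)          # total addresses in the block
hosts = list(net.hosts())         # excludes network and broadcast addresses
print(hosts[0], "-", hosts[-1])   # usable host range
```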


# create a router

quantum router-create --tenant-id d8eca2f95bbf4ddc8bda878fe9669661 net_myproject_router


# bind the router to the L3 agent

quantum l3-agent-router-add 829f424b-0879-4fee-a373-84c0f0bcbb9b net_myproject_router


# attach the router to the subnet

quantum router-interface-add f3e2c02e-2146-4388-b415-c95d45f4f3a3 99189c7b-50cd-4353-9358-2dd74efbb762


# restart quantum services

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done


# create a credentials file

vi myproject

export OS_TENANT_NAME=myproject

export OS_USERNAME=myuser

export OS_PASSWORD=<temporary password>

export OS_AUTH_URL="http://192.168.230.141:5000/v2.0/"


# proceed with project credentials

source myproject

nova image-list

nova secgroup-list

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

ssh-keygen

nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

nova keypair-list

nova flavor-list

nova boot test01 --flavor 1 --image 5c4c2339-55bd-4e9b-86cb-23694e3b9b17 --key_name mykey --security_group default


nova floating-ip-list

nova floating-ip-create

nova add-floating-ip 80eb7545-258e-4f26-a842-c1993cb03ae5 192.168.75.225

nova remove-floating-ip 80eb7545-258e-4f26-a842-c1993cb03ae5 192.168.75.225

nova floating-ip-delete 192.168.75.225


nova volume-list

nova volume-create --display_name ebs01 1

nova volume-attach 80eb7545-258e-4f26-a842-c1993cb03ae5 c209e2f1-5ff7-496c-8928-d57487d86c6f /dev/vdb

nova volume-detach 80eb7545-258e-4f26-a842-c1993cb03ae5 a078f20a-62c6-432c-8fa2-7cfd9950a64f

nova volume-delete a078f20a-62c6-432c-8fa2-7cfd9950a64f


# after logging in, format as ext4 and mount

mke2fs -t ext4 /dev/vdb

mount /dev/vdb /test



[ VNC console access ]

nova get-vnc-console 80eb7545-258e-4f26-a842-c1993cb03ae5 novnc





Posted by Kubernetes Korea co-leader seungkyua@gmail.com


1. EBS Backup

    - A service that backs up EBS incrementally using a file-based approach rather than snapshots


2. Live Migration Boot From Volume

    - Live migration of a VM booted from an iSCSI-based root volume


3. ENI (Elastic Network Interface)

    - When licenses are tied to MAC addresses, create a pool of virtual interfaces in advance,

        then assign one from the pool when an instance is created later


4. Network QoS per Flavor Type

    - A service that sets an instance's network bandwidth per flavor type




[ OpenStack Contribution List ]


1. Network QoS per flavor type


2. API checks per task


3. Scheduler

    - How filters and weights are applied


4. EBS Backup

    - A service that backs up EBS incrementally using a file-based approach rather than snapshots


5. ENI (Elastic Network Interface)


6. Project-to-host filter scheduling


7. Live migration of VMs booted from EBS




nova variable

OpenStack/Nova 2012. 7. 21. 17:33
vi /usr/lib/python2.7/json/encoder.py

import datetime           (add at line 4)
...
...
elif isinstance(o, datetime.datetime):   (add at line 431)
    pass
elif o.__module__.startswith('nova'):
    yield str(o)

# Convert to JSON and print
import json
from nova.openstack.common import jsonutils  (use either json or jsonutils)
...
LOG.debug("image_service = %s", jsonutils.dumps(jsonutils.to_primitive(vars(image_service)), indent=2))
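As a non-invasive alternative to patching encoder.py, json.dumps accepts a default= fallback that is called for any object it cannot serialize natively. The sketch below shows the same effect; FakeImageService is a made-up stand-in for an object like GlanceImageService, not a real Nova class:

```python
import datetime
import json

def to_str(o):
    # Fallback for objects json can't serialize natively:
    # ISO-format datetimes, str() everything else.
    if isinstance(o, datetime.datetime):
        return o.isoformat()
    return str(o)

class FakeImageService(object):
    """Stand-in for an object like GlanceImageService."""
    def __init__(self):
        self.created_at = datetime.datetime(2012, 12, 26, 14, 49)
        self.client = object()

dump = json.dumps(vars(FakeImageService()), default=to_str, indent=2)
print(dump)
```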


nova.api.openstack.compute.servers.py >> Controller >> create()

inst_type = {
  "memory_mb": 512,
  "root_gb": 0,
  "deleted_at": null,
  "name": "m1.tiny",
  "deleted": false,
  "created_at": null,
  "ephemeral_gb": 0,
  "updated_at": null,
  "disabled": false,
  "vcpus": 1,
  "extra_specs": {},
  "swap": 0,
  "rxtx_factor": 1.0,
  "is_public": true,
  "flavorid": "1",
  "vcpu_weight": null,
  "id": 2
}
image_href = "5c4c2339-55bd-4e9b-86cb-23694e3b9b17"
display_name = "test02"
display_description = "test02"
key_name = "mykey"
metadata = {}
access_ip_v4 = null
access_ip_v6 = null
injected_files = []
admin_password = "TbvbCd2NgA5S"
min_count = 1
max_count = 1
requested_networks = [
  [
    "0802c791-d4aa-473b-94a8-46d2b4aff91b",
    "192.168.100.5"
  ]
]
security_group = [
  "default"
]
user_data = null
availability_zone = null
config_drive = null
block_device_mapping = []
auto_disk_config = null
scheduler_hints = {}



nova.compute.api.py >> API >> _create_instance()

[ Insert a new DB row ]

create_db_entry_for_new_instance

image_service = <nova.image.glance.GlanceImageService object at 0x588c450>

image_id = "5c4c2339-55bd-4e9b-86cb-23694e3b9b17"

image = {
  "status": "active",
  "name": "tty-linux",
  "deleted": false,
  "container_format": "ami",
  "created_at": ,
  "disk_format": "ami",
  "updated_at": ,
  "id": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
  "owner": "2ffae825c88b448bad4ef4d14f5c1204",
  "min_ram": 0,
  "checksum": "10047a119149e08fb206eea89832eee0",
  "min_disk": 0,
  "is_public": false,
  "deleted_at": null,
  "properties": {
    "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
    "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269"
  },
  "size": 25165824
}

context = {
  "project_name": "service",
  "user_id": "fa8ecb2a7110435daa10a5e9e459c7ca",
  "roles": [
    "admin",
    "member"
  ],
  "_read_deleted": "no",
  "timestamp": "2012-12-26T14:49:00.820425",
  "auth_token": "1f31ccc31d324ba88802826270772522",
  "remote_address": "192.168.75.137",
  "quota_class": null,
  "is_admin": true,
  "service_catalog": [
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204/v2.0",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204",
          "id": "82d6c5ae2899473c8aa77bd2ae99881b",
          "internalURL": "http://192.168.75.137:8776/v1/2ffae825c88b448bad4ef4d14f5c1204"
        }
      ],
      "type": "volume",
      "name": "volume"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:9292/v1",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:9292/v1",
          "id": "2e65219ddb4143b9b0a89c334a5177dc",
          "internalURL": "http://192.168.75.137:9292/v1"
        }
      ],
      "type": "image",
      "name": "glance"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204",
          "id": "0e82d644a5cb47b1890f81bf67b43dec",
          "internalURL": "http://192.168.75.137:8774/v2/2ffae825c88b448bad4ef4d14f5c1204"
        }
      ],
      "type": "compute",
      "name": "nova"
    },
    {
      "endpoints_links": [],
      "endpoints": [
        {
          "adminURL": "http://192.168.75.137:35357/v2.0",
          "region": "RegionOne",
          "publicURL": "http://192.168.75.137:5000/v2.0",
          "id": "2d85bf25bb7e4e6a82efa67063d51ac1",
          "internalURL": "http://192.168.75.137:5000/v2.0"
        }
      ],
      "type": "identity",
      "name": "keystone"
    }
  ],
  "request_id": "req-bda14315-16de-4b23-8d53-24745f87fdad",
  "instance_lock_checked": false,
  "project_id": "2ffae825c88b448bad4ef4d14f5c1204",
  "user_name": "admin"
}

request_spec = {
  "block_device_mapping": [],
  "image": {
    "status": "active",
    "name": "tty-linux",
    "deleted": false,
    "container_format": "ami",
    "created_at": "2012-11-30T07:51:06.000000",
    "disk_format": "ami",
    "updated_at": "2012-11-30T07:51:07.000000",
    "properties": {
      "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
      "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269"
    },
    "min_disk": 0,
    "min_ram": 0,
    "checksum": "10047a119149e08fb206eea89832eee0",
    "owner": "2ffae825c88b448bad4ef4d14f5c1204",
    "is_public": false,
    "deleted_at": null,
    "id": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
    "size": 25165824
  },
  "instance_type": {
    "memory_mb": 512,
    "root_gb": 0,
    "deleted_at": null,
    "name": "m1.tiny",
    "deleted": false,
    "created_at": null,
    "ephemeral_gb": 0,
    "updated_at": null,
    "disabled": false,
    "vcpus": 1,
    "extra_specs": {},
    "swap": 0,
    "rxtx_factor": 1.0,
    "is_public": true,
    "flavorid": "1",
    "vcpu_weight": null,
    "id": 2
  },
  "instance_properties": {
    "vm_state": "building",
    "availability_zone": null,
    "ramdisk_id": "cc111638-8590-4b5b-8759-f551017ea269",
    "instance_type_id": 2,
    "user_data": null,
    "vm_mode": null,
    "reservation_id": "r-sviqmkvr",
    "user_id": "fa8ecb2a7110435daa10a5e9e459c7ca",
    "display_description": "test02",
    "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDPrhT0VICqukep0Zl3lz+ZvzZOKVwBEa9IFk2rUcDnjse9zGPy9bZHorEoGYwiywOTTC+Q422rIhAJQvev7OKF4qViyndbLPrlZudeA7oFBc2I0rqUmSwrmQv1Pz4h8jrMdgelgWS1QDPgyFp3O72sS9wP0yQMZIneSdLIV2SxrxVxsISYL5GhbF/A7G9ejSRmLoZgQoDmDW+CtIHFX8EsDDC9K94Dz9F3UCMZwCGGRO4S2o+wValsAuE0xLUF8U6VJ86NrILEJYvNVXPeKyQl9Ktuow0LWqjxtnLv78R/5ayKff+bX/7cekNzG8yeTog7it4kdKaitIb+G5j+h7T nova@ubuntu\n",
    "power_state": 0,
    "progress": 0,
    "project_id": "2ffae825c88b448bad4ef4d14f5c1204",
    "config_drive": "",
    "ephemeral_gb": 0,
    "access_ip_v6": null,
    "access_ip_v4": null,
    "kernel_id": "f14c0936-e591-4291-901f-239bc41fd3d6",
    "key_name": "mykey",
    "display_name": "test02",
    "config_drive_id": "",
    "architecture": null,
    "root_gb": 0,
    "locked": false,
    "launch_time": "2012-12-26T14:42:55Z",
    "memory_mb": 512,
    "vcpus": 1,
    "image_ref": "5c4c2339-55bd-4e9b-86cb-23694e3b9b17",
    "root_device_name": null,
    "auto_disk_config": null,
    "os_type": null,
    "metadata": {}
  },
  "security_group": [
    "default"
  ],
  "instance_uuids": [
    "55c4f897-11a7-457b-9b70-c8ef28549711"
  ]
}

admin_password = "5godsYKky8AR"
injected_files = []
requested_networks = [
  [
    "0802c791-d4aa-473b-94a8-46d2b4aff91b",
    "192.168.100.5"
  ]
]
filter_properties = {
  "scheduler_hints": {}
}


nova.scheduler.filter_scheduler.py >> FilterScheduler >> schedule_run_instance()


nova.compute.manager.py >> ComputeManager >> _run_instance()

request_spec = {

  "block_device_mapping": [],

  "image": {

    "status": "active",

    "name": "tty-linux",

    "deleted": false,

    "container_format": "ami",

    "created_at": "2012-12-16T10:37:48.000000",

    "disk_format": "ami",

    "updated_at": "2012-12-16T10:37:49.000000",

    "properties": {

      "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

      "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"

    },

    "min_disk": 0,

    "min_ram": 0,

    "checksum": "10047a119149e08fb206eea89832eee0",

    "owner": "0c74b5d96202433196af2faa9bff4bde",

    "is_public": false,

    "deleted_at": null,

    "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

    "size": 25165824

  },

  "instance_type": {

    "memory_mb": 512,

    "root_gb": 0,

    "deleted_at": null,

    "name": "m1.tiny",

    "deleted": false,

    "created_at": null,

    "ephemeral_gb": 0,

    "updated_at": null,

    "disabled": false,

    "vcpus": 1,

    "extra_specs": {},

    "swap": 0,

    "rxtx_factor": 1.0,

    "is_public": true,

    "flavorid": "1",

    "vcpu_weight": null,

    "id": 2

  },

  "instance_properties": {

    "vm_state": "building",

    "availability_zone": null,

    "launch_time": "2012-12-24T16:45:50Z",

    "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

    "instance_type_id": 2,

    "user_data": null,

    "vm_mode": null,

    "reservation_id": "r-gzio9556",

    "user_id": "034120010ad64ecfb1eeb2ac5f16854d",

    "display_description": "test01",

    "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",

    "power_state": 0,

    "progress": 0,

    "project_id": "0c74b5d96202433196af2faa9bff4bde",

    "config_drive": "",

    "ephemeral_gb": 0,

    "access_ip_v6": null,

    "access_ip_v4": null,

    "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

    "key_name": "mykey",

    "display_name": "test01",

    "config_drive_id": "",

    "architecture": null,

    "root_gb": 0,

    "locked": false,

    "launch_index": 0,

    "memory_mb": 512,

    "vcpus": 1,

    "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

    "root_device_name": null,

    "auto_disk_config": null,

    "os_type": null,

    "metadata": {}

  },

  "security_group": [

    "default"

  ],

  "instance_uuids": [

    "1be889ba-fe3b-4eb6-8730-157db1582f88"

  ]

}


filter_properties = {

  "config_options": {},

  "limits": {

    "memory_mb": 3000.0

  },

  "request_spec": {

    "block_device_mapping": [],

    "image": {

      "status": "active",

      "name": "tty-linux",

      "deleted": false,

      "container_format": "ami",

      "created_at": "2012-12-16T10:37:48.000000",

      "disk_format": "ami",

      "updated_at": "2012-12-16T10:37:49.000000",

      "properties": {

        "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

        "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"

      },

      "min_disk": 0,

      "min_ram": 0,

      "checksum": "10047a119149e08fb206eea89832eee0",

      "owner": "0c74b5d96202433196af2faa9bff4bde",

      "is_public": false,

      "deleted_at": null,

      "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

      "size": 25165824

    },

    "instance_type": {

      "memory_mb": 512,

      "root_gb": 0,

      "deleted_at": null,

      "name": "m1.tiny",

      "deleted": false,

      "created_at": null,

      "ephemeral_gb": 0,

      "updated_at": null,

      "disabled": false,

      "vcpus": 1,

      "extra_specs": {},

      "swap": 0,

      "rxtx_factor": 1.0,

      "is_public": true,

      "flavorid": "1",

      "vcpu_weight": null,

      "id": 2

    },

    "instance_properties": {

      "vm_state": "building",

      "availability_zone": null,

      "launch_time": "2012-12-24T16:45:50Z",

      "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

      "instance_type_id": 2,

      "user_data": null,

      "vm_mode": null,

      "reservation_id": "r-gzio9556",

      "user_id": "034120010ad64ecfb1eeb2ac5f16854d",

      "display_description": "test01",

      "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",

      "power_state": 0,

      "progress": 0,

      "project_id": "0c74b5d96202433196af2faa9bff4bde",

      "config_drive": "",

      "ephemeral_gb": 0,

      "access_ip_v6": null,

      "access_ip_v4": null,

      "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

      "key_name": "mykey",

      "display_name": "test01",

      "config_drive_id": "",

      "architecture": null,

      "root_gb": 0,

      "locked": false,

      "launch_index": 0,

      "memory_mb": 512,

      "vcpus": 1,

      "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

      "root_device_name": null,

      "auto_disk_config": null,

      "os_type": null,

      "metadata": {}

    },

    "security_group": [

      "default"

    ],

    "instance_uuids": [

      "1be889ba-fe3b-4eb6-8730-157db1582f88"

    ]

  },

  "instance_type": {

    "memory_mb": 512,

    "root_gb": 0,

    "deleted_at": null,

    "name": "m1.tiny",

    "deleted": false,

    "created_at": null,

    "ephemeral_gb": 0,

    "updated_at": null,

    "disabled": false,

    "vcpus": 1,

    "extra_specs": {},

    "swap": 0,

    "rxtx_factor": 1.0,

    "is_public": true,

    "flavorid": "1",

    "vcpu_weight": null,

    "id": 2

  },

  "retry": {

    "num_attempts": 1,

    "hosts": [

      "folsom"

    ]

  },

  "scheduler_hints": {}

}


requested_networks = [

  [

    "0802c791-d4aa-473b-94a8-46d2b4aff91b",

    "192.168.100.5"

  ]

]

injected_files = []

admin_password = "6Ty7wZA9wc5w"

is_first_time = true


instance = {

  "vm_state": "building",

  "availability_zone": null,

  "terminated_at": null,

  "ephemeral_gb": 0,

  "instance_type_id": 2,

  "user_data": null,

  "vm_mode": null,

  "deleted_at": null,

  "reservation_id": "r-gzio9556",

  "id": 4,

  "security_groups": [

    {

      "project_id": "0c74b5d96202433196af2faa9bff4bde",

      "user_id": "034120010ad64ecfb1eeb2ac5f16854d",

      "name": "default",

      "deleted": false,

      "created_at": "2012-12-16T11:47:01.000000",

      "updated_at": null,

      "rules": [

        {

          "from_port": 22,

          "protocol": "tcp",

          "deleted": false,

          "created_at": "2012-12-16T11:47:26.000000",

          "updated_at": null,

          "id": 1,

          "to_port": 22,

          "parent_group_id": 1,

          "cidr": "0.0.0.0/0",

          "deleted_at": null,

          "group_id": null

        },

        {

          "from_port": -1,

          "protocol": "icmp",

          "deleted": false,

          "created_at": "2012-12-16T11:47:41.000000",

          "updated_at": null,

          "id": 2,

          "to_port": -1,

          "parent_group_id": 1,

          "cidr": "0.0.0.0/0",

          "deleted_at": null,

          "group_id": null

        }

      ],

      "deleted_at": null,

      "id": 1,

      "description": "default"

    }

  ],

  "disable_terminate": false,

  "root_device_name": null,

  "user_id": "034120010ad64ecfb1eeb2ac5f16854d",

  "uuid": "1be889ba-fe3b-4eb6-8730-157db1582f88",

  "server_name": null,

  "default_swap_device": null,

  "info_cache": {

    "instance_uuid": "1be889ba-fe3b-4eb6-8730-157db1582f88",

    "deleted": false,

    "created_at": "2012-12-24T16:45:50.000000",

    "updated_at": null,

    "network_info": "[]",

    "deleted_at": null,

    "id": 4

  },

  "hostname": "test01",

  "launched_on": null,

  "display_description": "test01",

  "key_data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiyiud+EmmdRZ50aPPbC7Ys3Td19qp6q3Xnl+W8aFHJ21IbdnCNXZo3pXpeTJy8rvFTitYxpvD5WzGlmPdXoEryJibA6hbPg6hPLINul+SwtuXlqv6pucy+eMVuWhi9MfOKv/uuJpCFIwZuEHGHg3xeW6uVyWSURW9FGH/E6tKdGrB9T2afkPaROOBnK2BRy3Bj55ExZq8qjfsYKDibwoDPddW9rR5zRn7N3pY6rhnULjyWJAd7Ll3UltKMkl3V2BZV0cyvd3c+TMtVtaa8hE9ComrxKOucd84d2+dOyUaV8hr3N3sfe/oXnvlK23Uo9TKwmYfXvTykOtAtaYRss/z nova@folsom\n",

  "deleted": false,

  "scheduled_at": "2012-12-24T16:45:50.413093",

  "power_state": 0,

  "default_ephemeral_device": null,

  "progress": 0,

  "project_id": "0c74b5d96202433196af2faa9bff4bde",

  "launched_at": null,

  "config_drive": "",

  "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

  "access_ip_v6": null,

  "access_ip_v4": null,

  "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

  "key_name": "mykey",

  "updated_at": "2012-12-24T16:45:50.441013",

  "host": null,

  "display_name": "test01",

  "task_state": "scheduling",

  "shutdown_terminate": false,

  "root_gb": 0,

  "locked": false,

  "name": "instance-00000004",

  "created_at": "2012-12-24T16:45:50.000000",

  "launch_index": 0,

  "memory_mb": 512,

  "instance_type": {

    "memory_mb": 512,

    "root_gb": 0,

    "name": "m1.tiny",

    "deleted": false,

    "created_at": null,

    "ephemeral_gb": 0,

    "updated_at": null,

    "disabled": false,

    "vcpus": 1,

    "flavorid": "1",

    "swap": 0,

    "rxtx_factor": 1.0,

    "is_public": true,

    "deleted_at": null,

    "vcpu_weight": null,

    "id": 2

  },

  "vcpus": 1,

  "image_ref": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

  "architecture": null,

  "auto_disk_config": null,

  "os_type": null,

  "metadata": []

}


image_meta = {

  "status": "active",

  "name": "tty-linux",

  "deleted": false,

  "container_format": "ami",

  "created_at": "2012-12-16T10:37:48.000000",

  "disk_format": "ami",

  "updated_at": "2012-12-16T10:37:49.000000",

  "properties": {

    "kernel_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78",

    "ramdisk_id": "619a49c6-e653-4ca2-93f0-2e0e8cb50e78"

  },

  "min_disk": 0,

  "min_ram": 0,

  "checksum": "10047a119149e08fb206eea89832eee0",

  "owner": "0c74b5d96202433196af2faa9bff4bde",

  "is_public": false,

  "deleted_at": null,

  "id": "011a6a61-70fa-470b-a9cc-fbc7753833fb",

  "size": 25165824

}


network_info = [

  {

    "network": {

      "bridge": "br100",

      "subnets": [

        {    

          "ips": [

            {    

              "meta": {},

              "version": 4,

              "type": "fixed",

              "floating_ips": [],

              "address": "192.168.100.2"

            }    

          ],   

          "version": 4,

          "meta": {

            "dhcp_server": "192.168.100.1"

          },   

          "dns": [

            {    

              "meta": {},

              "version": 4,

              "type": "dns",

              "address": "8.8.8.8"

            }    

          ],   

          "routes": [],

          "cidr": "192.168.100.0/24",

          "gateway": {

            "meta": {},

            "version": 4,

            "type": "gateway",

            "address": "192.168.100.1"

          }    

        },   

        {    

          "ips": [],

          "version": null,

          "meta": {

            "dhcp_server": null

          },   

          "dns": [],

          "routes": [],

          "cidr": null,

          "gateway": {

            "meta": {},

            "version": null,

            "type": "gateway",

            "address": null

          }    

        } 

      ],

      "meta": {

        "tenant_id": null,

        "should_create_bridge": true,

        "bridge_interface": "br100"

      },

      "id": "da8b8d70-6522-495a-b9f7-9bfadb931a8f",

      "label": "private"

    },

    "meta": {},

    "id": "fe9cd80f-c807-4869-9933-cafce241ac0e",

    "address": "fa:16:3e:31:f5:00"

  }

]


block_device_info = {

  "block_device_mapping": [],

  "root_device_name": null,

  "ephemerals": [],

  "swap": null

}


injected_files = []


nova.compute.manager.py >> ComputeManager >> _allocate_network()


vm_states = BUILDING

task_states = NETWORKING

expected_task_states = None



    nova.network.api.py >> API >> allocate_for_instance()


    nova.network.manager.py >> NetworkManager >> allocate_for_instance()


    nova.network.manager.py >> NetworkManager >> _allocate_mac_address()


    nova.network.manager.py >> RPCAllocateFixedIP >> _allocate_fixed_ips()


    nova.network.manager.py >> NetworkManager >> get_instance_nw_info()



nova.compute.manager.py >> ComputeManager >> _prep_block_device()


vm_states = BUILDING

task_states = BLOCK_DEVICE_MAPPING


nova.compute.manager.py >> ComputeManager >> _spawn()


[ When VM creation starts ]

vm_states = BUILDING

task_states = SPAWNING

expected_task_states = BLOCK_DEVICE_MAPPING


[ After creation finishes ]

power_state = current_power_state

vm_state = ACTIVE

task_state = None

expected_task_states = SPAWNING


nova.virt.libvirt.driver.py >> LibvirtDriver >> spawn()
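The vm_state/task_state transitions traced above can be modeled as a guarded update: a change is applied only when the instance's current task_state matches the caller's expected state, so concurrent transitions fail fast. This is a simplified illustration, not Nova's actual instance_update code:

```python
# State constants mirroring the values seen in the trace above.
BUILDING = "building"
ACTIVE = "active"
BLOCK_DEVICE_MAPPING = "block_device_mapping"
SPAWNING = "spawning"
NETWORKING = "networking"

class UnexpectedTaskStateError(Exception):
    pass

def instance_update(instance, expected_task_state, **updates):
    """Apply updates only if task_state matches what the caller expects."""
    if instance["task_state"] != expected_task_state:
        raise UnexpectedTaskStateError(
            "expected %s, got %s" % (expected_task_state, instance["task_state"]))
    instance.update(updates)
    return instance

inst = {"vm_state": BUILDING, "task_state": BLOCK_DEVICE_MAPPING}

# _spawn() starts: BLOCK_DEVICE_MAPPING -> SPAWNING
instance_update(inst, BLOCK_DEVICE_MAPPING, task_state=SPAWNING)

# spawn finished: vm_state ACTIVE, task_state cleared
instance_update(inst, SPAWNING, vm_state=ACTIVE, task_state=None)
print(inst)
```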



A stiff formal report couldn't convey what I saw and felt, so I'm posting it to this blog instead.


Conferences where many companies gather, not just the OpenStack Design Summit, really are important.


Honestly, if you attend such a conference knowing nothing about the subject, you'll probably take away less than a third of it.

But if you are already using the technology to build a product, you can find answers to some of your problems.


1. A marketplace for new technologies and products

    - The main discussions at this conference were about overturning traditional networking concepts to fit the cloud.

      The system we built also has scale-out problems on the network side.

      Frankly, nobody in Korea could give us an answer; everyone was trapped inside the frame of their existing technology.

      Our system will be fine for a while. Even with rapid customer growth we can scale to a degree,

      though not as a fundamental solution.



      Many vendors come here with products embodying their own answers to exactly these problems.

      They aren't perfect, but they are serviceable to a degree.

      And to sell those products they try to persuade engineers with logic; of course, if the persuasion

      fails, the product is useless.


      What matters, though, are the technologies and concepts that can solve the problems.

      With the hints I picked up here, I think we can build a fair amount of it ourselves.

      I now have a picture that solves half of our problems.


      Also, looking at their products, I could turn requirements that at first seemed impossible to implement

      into implementable architectures.

      Auto Scaling and Hybrid Bursting are such cases.



2. Deciding what gets developed next, and how

     - OpenStack's major development items and methods are decided at the Design Summit.

        It seems largely settled which features will be developed, but it was impressive to watch the developers

        discuss and decide how to proceed. Like agile, they discuss with sketches instead of design documents

        and act immediately, even online. I think this is what makes the open-source community work.

        Some remarks do get brushed aside in discussion, but still, everything is in the open.


        The thing is, active participation going forward requires developers to discuss face to face and get to know each other.

        Developers take a deep interest in one another once you've shown what you can do.

        In that sense, as many developers as possible should attend the Design Summit,

        and then participate actively in development along the direction it sets.

        If you only passively consume the source, you can never be more than an outsider.


        The problem is that I was the only developer from our side who came.

        Relayed secondhand, it will carry less impact, and the synergy will be smaller.

        And the developers I met here will remember only me.

        Given how big this market is, and how much the company is investing, failing to send a few more developers

        is, from the company's standpoint, nothing but penny-wise and pound-foolish.


3. An objective assessment of our technology against other companies

      - Meeting other companies adopting the same technology let us assess our skills objectively.

         Some asked about problems we had already solved; others, conversely, arrived with solutions

         to problems we were still wrestling with.

         The questions also reveal how deeply each side has thought about things.

         My estimate of our level: upper-middle?



Finally, here in Silicon Valley there are no cloud or big-data people left.

Google, Facebook, HP, DELL, Cisco, IBM, and the other majors have taken them all.

Introduce yourself well on LinkedIn and you'll get a call from Silicon Valley right away. ^^





The 2012 OpenStack Folsom Design Summit was held in San Francisco.

For someone building a cloud system, it was a chance to gauge, on site, where we stand and where we are headed.


The main topic of this Design Summit was Quantum, the software-based virtual networking project.

That nearly two full days were devoted to Quantum sessions tells you how important it is.


Anyone who has built a cloud system using Flat Network or VLAN mode will have felt firsthand why Quantum is needed. (At least I have.)


The other topic was Hybrid Cloud.

In particular, a basic architectural outline has emerged for Hybrid Cloud Bursting, which had seemed nearly impossible to build.

Meeting various companies here is how I got the hints.


Hybrid Cloud itself hasn't been discussed yet, but it will probably come up at the Design Summit in the second half of this year.

If it isn't adopted, I'll have to push for it strongly. ^^


The Design Summit will help companies just starting out with cloud find their direction, and companies that attend with some prior knowledge will take away even more.




