
[ Starting the Neutron server ]

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


[ nova boot command ]

nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


[ Checking whether the VXLAN tunnels are up ]

sudo ovs-ofctl show br-tun


[ neutron.conf settings on the controller node ]

nova_admin_tenant_id = service    # must be the tenant ID, not the tenant name



[ Steps to delete a net ]

1. Remove the router's interface to the subnet

neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

neutron subnet-delete [subnet-id]


3. Delete the net

neutron net-delete [net-id]



[ Creating a net with the vxlan network type ]

neutron net-create demo-net --provider:network_type vxlan



[ Registering security group rules ]

neutron security-group-rule-create --protocol icmp --direction ingress default

neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default



[ Adding a default gateway route ]

route add -net "0.0.0.0/0" gw "10.0.0.1"



[ MTU settings ]

1. Configure in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set dynamically (takes effect immediately, but is lost on reboot)

ifconfig eth2 mtu 9000



[ Adding a floating IP ]

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]



[ Building a distribution that can be installed with pip ]

$ sudo python setup.py sdist --formats=gztar



[ Metadata service ]

1. metadata-agent and neutron-ns-metadata-proxy only need to run on the network node;

   the compute node does not need them.

   VMs on the compute nodes call the service through the network node's qdhcp namespace, which they see as their gateway.


2. Edit /etc/nova/nova.conf on the controller node

[neutron]

service_metadata_proxy=True


3. Edit /etc/neutron/metadata_agent.ini on the network and compute nodes

auth_region = regionOne   # writing RegionOne here causes an error


[ Inside a cirros VM ]

$ wget http://169.254.169.254/latest/meta-data/instance-id




cat /etc/nova/nova.conf  | grep -v ^# | grep -v ^$ | grep metadata

cat /etc/neutron/metadata_agent.ini | grep -v ^# | grep -v ^$ | grep metadata

cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$



[ Calling the metadata service directly on the controller node ]

curl \

  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \

  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \

  -H 'x-instance-id-signature: 80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \

  http://localhost:8775/latest/meta-data


[ Computing x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> hmac.new(b'openstack', b'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>
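The interactive session above can be wrapped into a small helper that produces all three headers used in the curl call (a sketch; `metadata_headers` is a hypothetical name, and the secret must match metadata_proxy_shared_secret in nova.conf):

```python
import hashlib
import hmac

def metadata_headers(shared_secret, instance_id, tenant_id):
    # The signature is HMAC-SHA256 over the instance id, keyed with
    # metadata_proxy_shared_secret from nova.conf.
    signature = hmac.new(shared_secret.encode(),
                         instance_id.encode(),
                         hashlib.sha256).hexdigest()
    return {
        'x-instance-id': instance_id,
        'x-tenant-id': tenant_id,
        'x-instance-id-signature': signature,
    }

headers = metadata_headers('openstack',
                           'e9b12a36-ae7a-4d2c-be03-319655789927',
                           '7d7c68c1d33f4ffb8a7c5bca770e394c')
```

The resulting dict maps directly onto the three -H options of the curl command.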


[ neutron-server init script ]

1. Delete the file that was copied to /etc/init.d/neutron-server


2. sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script

  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server

  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \

    --config-file=/etc/neutron/neutron.conf \

    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \

    --log-file=/var/log/neutron/neutron-server.log $CONF_ARG

end script









Posted by seungkyua@gmail.com

1. I just want him to fit in.


2. Living in Sedona has been transformational.


3. I convinced Dad and Gloria to invite her.


4. Somebody's full of herself.   (said when Claire mentioned her uncles patted her butt at the wedding)


5. Why are you always on me about everything?


6. Guys pull pranks like that all the time.


7. Instead of dredging up the whole incident,


8. Maybe you could pave the way so that I can apologize.   (pave the way: prepare the ground for something)


9. You're reaching out, trying to hold on to something awesome.


10. They kept patting my butt.



Nana got totally wasted. : Nana was completely drunk.

It was gross.

The bride and the groom




Python notes

OpenStack 2015. 1. 29. 17:43

[ Software Architecture ]


1. REST API & Queue

2. Source Directory Structure


[ package ]


1. SQLAlchemy : The Python SQL Toolkit and Object Relational Mapper



[ Programming ]

1. Dynamically importing modules


# example: import the os module dynamically

import sys

def import_module(import_str):

    __import__(import_str)          # may raise ImportError

    return sys.modules[import_str]  # may raise KeyError


os = import_module("os")

os.getcwd()


# example: import a versioned submodule (version and submodule are string variables)

module = 'mymodule.%s' % version

module = '.'.join((module, submodule))

import_module(module)


# example: import a class by its dotted path

import_value = "nova.db.sqlalchemy.models.NovaBase"

mod_str, _sep, class_str = import_value.rpartition('.')

import_module(mod_str)          # the module must be imported before it appears in sys.modules

novabase_class = getattr(sys.modules[mod_str], class_str)

novabase_class()
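The two snippets above can be combined into one runnable helper; `import_class` is a hypothetical name, and the standard library's collections module stands in for a Nova module here:

```python
import sys

def import_module(import_str):
    __import__(import_str)          # may raise ImportError
    return sys.modules[import_str]

def import_class(import_str):
    # "package.module.Class" -> the Class object
    mod_str, _sep, class_str = import_str.rpartition('.')
    import_module(mod_str)          # the module must be loaded first
    return getattr(sys.modules[mod_str], class_str)  # may raise AttributeError

OrderedDict = import_class("collections.OrderedDict")
d = OrderedDict([(1, 'a')])
```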




2. Data Access Object (DAO)


nova.db.base.Base

    def __init__(...):

        self.db = import_module(db_driver)



# usage example

Manager(base.Base)

...

self.db.instance_update(...)


self.db is an attribute of Base

db_driver is the "nova.db" package

the nova.db package's __init__ does: from nova.db.api import *

so self.db = nova.db.api


nova.db.api

IMPL = nova.db.sqlalchemy.api

def instance_update(...)

   IMPL.instance_update(...)
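The IMPL indirection sketched above can be reproduced in isolation (simplified; `_SqlalchemyApi` is a hypothetical stand-in for the nova.db.sqlalchemy.api module):

```python
class _SqlalchemyApi(object):
    # Stand-in for nova.db.sqlalchemy.api, the concrete backend.
    def instance_update(self, context, instance_id, values):
        return ('updated', instance_id, values)

# nova.db.api keeps a module-level IMPL pointing at the backend ...
IMPL = _SqlalchemyApi()

def instance_update(context, instance_id, values):
    # ... and each public DB API function just forwards to it,
    # so callers never touch the backend module directly.
    return IMPL.instance_update(context, instance_id, values)

result = instance_update(None, 'uuid-1', {'vm_state': 'active'})
```

Swapping the DB backend then only means pointing IMPL at a different module.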




3. Using Configuration


Using configuration in nova.db.sqlalchemy.api:

CONF = cfg.CONF

options are then read as CONF.compute_topic



oslo.config.cfg package

CONF = ConfigOpts()



Opt

name    = the option name

type    = StrOpt, IntOpt, FloatOpt, BoolOpt, ListOpt, DictOpt, IPOpt, MultiOpt, MultiStrOpt

dest    : the ConfigOpts property name the option maps to

default : the default value


ConfigOpts(collections.Mapping)

    def __init__(self):

        self._opts = {}     # dict of dicts of (opt:,  override:,  default: )


    def __getattr__(self, name):           # called only when the attribute does not actually exist

        return self._get(name)
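The __getattr__ fallback can be exercised with a tiny stand-in for ConfigOpts (`MiniConfigOpts` and its `register_opt` signature are simplified assumptions, not the real oslo.config API):

```python
class MiniConfigOpts(object):
    # Stand-in for oslo.config's ConfigOpts: unknown attribute lookups
    # fall through to __getattr__, which reads the registered options.
    def __init__(self):
        self._opts = {}     # name -> {'default': ..., 'override': ...}

    def register_opt(self, name, default=None):
        self._opts[name] = {'default': default, 'override': None}

    def __getattr__(self, name):
        # invoked only when normal attribute lookup fails
        info = self._opts.get(name)
        if info is None:
            raise AttributeError(name)
        if info['override'] is not None:
            return info['override']
        return info['default']

CONF = MiniConfigOpts()
CONF.register_opt('compute_topic', default='compute')
topic = CONF.compute_topic   # -> 'compute'
```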



4. The Decorator pattern


def require_admin_context(f):


    def wrapper(*args, **kwargs):

        nova.context.require_admin_context(args[0])

        return f(*args, **kwargs)

    return wrapper


@require_admin_context

def service_get_by_compute_host(context, host):

    ...


* To define a decorator as a class, give the class a __call__() member function so its instances can be called like functions
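A class-based version of require_admin_context, using __call__ as described in the note above (a sketch; the admin check is simplified to an is_admin flag on the context):

```python
import functools

class RequireAdminContext(object):
    # Class-based decorator: the instance wraps f, and __call__ makes
    # the instance itself callable like the original function.
    def __init__(self, f):
        functools.update_wrapper(self, f)
        self.f = f

    def __call__(self, *args, **kwargs):
        context = args[0]
        if not getattr(context, 'is_admin', False):   # simplified admin check
            raise RuntimeError('admin context required')
        return self.f(*args, **kwargs)

@RequireAdminContext
def service_get_by_compute_host(context, host):
    return ('service-on', host)

class Context(object):
    is_admin = True

result = service_get_by_compute_host(Context(), 'compute1')   # -> ('service-on', 'compute1')
```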


This differs somewhat from the GoF Decorator pattern.

GoF Decorator: adds decoration via inheritance (object-oriented)

Actually closer to GoF's Template Method pattern (though that, too, differs in its object-oriented form)


Similar to AspectJ or Spring AOP (Aspect-Oriented Programming), though the Python usage is simpler




1. I was gonna get it.


2. I was just being facetious.


3. I'm pretty sure this is a typo.


4. He is a snob.   (someone pretentious, who looks up to high society)

I'm discerning.

I was a hick.

He's a little jumpy.


5. I'm all about taking it to the next level.


6. It's not really her thing.

Your thing is to provoke.


7. Every time I would get a little boisterous,


8. That's no way to talk to your grandmother.


9. It's just kind of irritating.


10. I think I strained something.





go figure : how strange / makes no sense

without missing a beat : immediately, right on cue

just past the tires : (in a store like Costco) right past the tire section

















1. She just moved in down the block.


2. We'll have to have you over sometime.


3. I worry about the ridicule he might get from loudmouth bullies.   (bully: someone who picks on the weak)


4. Sometimes a man's gotta put his foot down and do what a man's gotta do.


5. Kidding. Just chill, please.


6. We're gonna start with the hello dance, and then we're gonna move on to blocks and finger painting.


7. Any chance I could get a break on this one?


8. Just get me to wine country.


9. You think I like this arrangement?


10. Why are you trying to sneak around and hide things from me?






1. Let's take it down a notch.


2. I'm having a friend over today.


3. I might as well just tell him not to come.


4. I'm bound to be a little surprised,


5. I didn't mean to.


6. Now you have to follow through.


7. What the hell is that supposed to mean?


8. It's supposed to be nothing.


9. Don't answer it, I'll get it.


10. She is like completely freaking out and embarrassing me.










[ Network layout ]

eth0  : NAT (public network)

eth1  : host-only (private management network)

eth2  : host-only (private data network)


controller : eth0 - 192.168.75.151    eth1 - 192.168.230.151

network    : eth0 - 192.168.75.152    eth1 - 192.168.230.152    eth2 - 192.168.200.152

compute    : eth0 - 192.168.75.153    eth1 - 192.168.230.153    eth2 - 192.168.200.153


0. Kernel version

The kernel must be upgraded from 3.13.0-24-generic to 3.13.0-34-generic


1. Change the hostname

$ sudo vi /etc/hostname

...

controller

...

$ sudo hostname -F /etc/hostname


$ sudo vi /etc/hosts

...

192.168.230.151 controller

192.168.230.152 network

192.168.230.153 compute


2. Configure NTP and the local time zone

$ sudo apt-get install ntp

$ sudo vi /etc/ntp.conf

...

server time.bora.net

...

$ sudo ntpdate -u time.bora.net

$ sudo ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

$ sudo service ntp restart


3. Create the user and configure sudo

# adduser stack

# visudo

...

stack   ALL=(ALL:ALL) NOPASSWD: ALL           # add as the last line


4. Configure IP forwarding and reverse-path filtering

sudo vi /etc/sysctl.conf

...

net.ipv4.conf.default.rp_filter=0         # 1 enables strict reverse-path (anti-spoofing) filtering; 0 disables it

net.ipv4.conf.all.rp_filter=0             # disabled so Neutron's asymmetric/routed traffic is not dropped

net.ipv4.ip_forward=1

...

$ sudo sysctl -p


5. Install common packages

- Python pip

- Python development headers

- libevent development headers (for Python eventlet)

- Python MySQL library

- vlan and bridge utilities

- lvm                               (for Cinder)

- Open vSwitch

- Python libvirt bindings    (to control KVM)

- nbd kernel module client   (to mount VM disks)

- ipset                          (used for OVS performance when enable_ipset=True in ml2)


$ sudo apt-get install python-pip

$ sudo apt-get install python-dev

$ sudo apt-get install libevent-dev

$ sudo apt-get install python-mysqldb

$ sudo apt-get install vlan bridge-utils

$ sudo apt-get install lvm2

$ sudo apt-get install openvswitch-switch

$ sudo apt-get install python-libvirt

$ sudo apt-get install nbd-client

$ sudo apt-get install ipset


$ sudo apt-get install python-tox           # tox: tool used to generate nova.conf

$ sudo apt-get install libmysqlclient-dev   # tox needs the mysql config when generating

$ sudo apt-get install libpq-dev            # tox needs the pq config when generating

$ sudo apt-get install libxml2-dev          # xml parsing needed when generating with tox

$ sudo apt-get install libxslt1-dev         # xml parsing needed when generating with tox

$ sudo apt-get install libvirt-dev          # needed when generating with tox

$ sudo apt-get install libffi-dev           # needed when generating with tox



[ Processes and packages per node ]


1. Processes that run on the Controller node

nova-api

nova-scheduler

nova-conductor

nova-consoleauth

nova-console

nova-novncproxy

nova-cert


neutron-server


2. Processes that run on the Network node

   metadata service: metadata-agent and neutron-ns-metadata-proxy only need to run on the network node

neutron-l3-agent

neutron-dhcp-agent

neutron-openvswitch-agent

neutron-metadata-agent          # needed on the Network node for the metadata service

neutron-ns-metadata-proxy     # VMs call it via the network node's qdhcp namespace, which they see as their gateway


3. Processes that run on the Compute node

nova-compute


neutron-l3-agent

neutron-openvswitch-agent



1. Neutron packages to install on the Controller node

neutron-server

neutron-plugin-ml2


2. Neutron packages to install on the Network node

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)

neutron-dhcp-agent


3. Neutron packages to install on the Compute node

neutron-common

neutron-plugin-ml2

neutron-plugin-openvswitch-agent

neutron-l3-agent   (DVR)



###############   controller   ######################


[ Install RabbitMQ ]

$ sudo apt-get install rabbitmq-server

$ sudo rabbitmqctl change_password guest rabbit


[ Install MySQL ]

$ sudo apt-get install mysql-server python-mysqldb

$ sudo vi /etc/mysql/my.cnf

...

bind-address        = 0.0.0.0

...

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

character_set_filesystem = utf8

...

$ sudo service mysql restart


[ Install Keystone ]


1. Install the Keystone package

$ mkdir -p Git

$ cd Git

$ git clone http://git.openstack.org/openstack/keystone.git

$ cd keystone

$ git checkout -b 2014.2.1 tags/2014.2.1


$ sudo pip install pbr==0.9               # pbr has version-resolution problems, so install it separately

$ sudo pip install -e .                   # install the source tree with pip


2. Register the DB

$ mysql -uroot -pmysql

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';


3. Create the conf and log directories

$ sudo mkdir -p /var/log/keystone

$ sudo chown -R stack.stack /var/log/keystone

$ sudo mkdir -p /etc/keystone

$ sudo cp ~/Git/keystone/etc/* /etc/keystone/.


$ sudo vi /etc/logrotate.d/openstack

/var/log/keystone/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/nova/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/cinder/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/glance/*.log {

    daily

    rotate 31

    missingok

    dateext

}


/var/log/neutron/*.log {

    daily

    rotate 31

    missingok

    dateext

}


4. Copy the conf files

$ sudo chown -R stack.stack /etc/keystone

$ cd /etc/keystone

$ mv keystone.conf.sample keystone.conf

$ mv logging.conf.sample logging.conf

$ mkdir -p ssl

$ cp -R ~/Git/keystone/examples/pki/certs /etc/keystone/ssl/.

$ cp -R ~/Git/keystone/examples/pki/private /etc/keystone/ssl/.


5. Configure keystone.conf

$ sudo vi keystone.conf


[DEFAULT]

admin_token=ADMIN

admin_workers=2

max_token_size=16384

debug=True

logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s

rabbit_host=controller

rabbit_password=rabbit

log_file=keystone.log

log_dir=/var/log/keystone


[catalog]

driver=keystone.catalog.backends.sql.Catalog


[database]

connection=mysql://keystone:keystone_dbpass@controller/keystone


[identity]

driver=keystone.identity.backends.sql.Identity


[paste_deploy]

config_file=/etc/keystone/keystone-paste.ini


[token]

expiration=7200

driver=keystone.token.persistence.backends.sql.Token


6. Create the keystone tables

$ keystone-manage db_sync


7. Register the init script

$ sudo vi /etc/init/keystone.conf


description "Keystone server"

author "somebody"


start on (filesystem and net-device-up IFACE!=lo)

stop on runlevel [016]


chdir /var/run


exec su -c "keystone-all" stack


$ sudo service keystone start


8. Create initrc for the initial keystone commands

$ vi initrc


export OS_SERVICE_TOKEN=ADMIN

export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0


9. Register the tenant, user, and role

$ . initrc

$ keystone tenant-create --name=admin --description="Admin Tenant"

$ keystone tenant-create --name=service --description="Service Tenant"

$ keystone user-create --name=admin --pass=ADMIN --email=admin@example.com

$ keystone role-create --name=admin

$ keystone user-role-add --user=admin --tenant=admin --role=admin


10. Register the service

$ keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"


11. Register the endpoint

$ keystone endpoint-create --service=keystone --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0


12. Create adminrc

$ unset OS_SERVICE_TOKEN

$ unset OS_SERVICE_ENDPOINT

$ vi adminrc

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN

export OS_TENANT_NAME=admin

export OS_AUTH_URL=http://controller:35357/v2.0


13. Keystone conf file listing

stack@controller:/etc/keystone$ ll

total 104

drwxr-xr-x   3 stack stack  4096 Jan  7 15:53 ./

drwxr-xr-x 137 root  root  12288 Jan  7 17:23 ../

-rw-r--r--   1 stack stack  1504 Jan  7 11:16 default_catalog.templates

-rw-r--r--   1 stack stack 47749 Jan  7 11:51 keystone.conf

-rw-r--r--   1 stack stack  4112 Jan  7 11:16 keystone-paste.ini

-rw-r--r--   1 stack stack  1046 Jan  7 11:16 logging.conf

-rw-r--r--   1 stack stack  8051 Jan  7 11:16 policy.json

-rw-r--r--   1 stack stack 10676 Jan  7 11:16 policy.v3cloudsample.json

drwxrwxr-x   4 stack stack  4096 Jan  7 11:55 ssl/

stack@controller:/etc/keystone$ cd ssl

stack@controller:/etc/keystone/ssl$ ll

total 16

drwxrwxr-x 4 stack stack 4096 Jan  7 11:55 ./

drwxr-xr-x 3 stack stack 4096 Jan  7 15:53 ../

drwxrwxr-x 2 stack stack 4096 Jan  7 11:54 certs/

drwxrwxr-x 2 stack stack 4096 Jan  7 11:55 private/



[ Install Glance ]


1. Install the Glance package

$ git clone http://git.openstack.org/openstack/glance.git

$ cd glance

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the DB

$ mysql -uroot -pmysql

mysql> CREATE DATABASE glance;

mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';


3. Register the service

$ keystone user-create --name=glance --pass=glance_pass --email=glance@example.com

$ keystone user-role-add --user=glance --tenant=service --role=admin

$ keystone service-create --name=glance --type=image --description="Glance Image Service"

$ keystone endpoint-create --service=glance --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292


4. Create the conf and log directories

$ sudo mkdir -p /var/log/glance

$ sudo chown -R stack.stack /var/log/glance

$ sudo mkdir -p /etc/glance

$ sudo cp ~/Git/glance/etc/glance-* /etc/glance/.

$ sudo cp ~/Git/glance/etc/*.json /etc/glance/.

$ sudo cp ~/Git/glance/etc/logging.cnf.sample /etc/glance/logging.cnf


$ sudo mkdir -p /var/lib/glance

$ sudo chown stack.stack /var/lib/glance

$ mkdir -p /var/lib/glance/images

$ mkdir -p /var/lib/glance/image-cache


5. Change the conf ownership

$ sudo chown -R stack.stack /etc/glance


6. Configure glance-api.conf

$ vi /etc/glance/glance-api.conf


[DEFAULT]

verbose = True

debug = True

rabbit_host = controller

rabbit_password = rabbit

image_cache_dir = /var/lib/glance/image-cache/

delayed_delete = False

scrub_time = 43200

scrubber_datadir = /var/lib/glance/scrubber


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


[glance_store]

filesystem_store_datadir = /var/lib/glance/images/


7. Configure glance-registry.conf

$ vi /etc/glance/glance-registry.conf


[DEFAULT]

verbose = False

debug = False

rabbit_host = controller

rabbit_password = rabbit


[database]

connection = mysql://glance:glance_dbpass@controller/glance


[keystone_authtoken]

identity_uri = http://controller:35357

auth_uri = http://controller:5000/v2.0

admin_tenant_name = service

admin_user = glance

admin_password = glance_pass


[paste_deploy]

flavor=keystone


8. Create the glance tables

$ glance-manage db_sync


9. Register the init scripts

$ sudo vi /etc/init/glance-api.conf


description "Glance API server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-api" stack


$ sudo service glance-api start


$ sudo vi /etc/init/glance-registry.conf


description "Glance registry server"

author "Soren Hansen <soren@linux2go.dk>"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "glance-registry" stack


10. Install the glance client package

$ git clone http://git.openstack.org/openstack/python-glanceclient.git

$ cd python-glanceclient

$ git checkout -b 0.15.0 tags/0.15.0

$ sudo pip install -e .


11. Register images

$ wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

$ glance image-create --name cirros-0.3.3 --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.3-x86_64-disk.img


# Register the Heat image (from devstack/files)

$ glance image-create --name [Heat]F17-x86_64-cfntools --is-public true --container-format bare --disk-format qcow2 --file F17-x86_64-cfntools.qcow2


# Register the Fedora image (from devstack/files)

$ glance image-create --name Fedora-x86_64-20-20140618-sda --is-public true --container-format bare --disk-format qcow2 --file Fedora-x86_64-20-20140618-sda.qcow2


# Register the mysql image (from devstack/files)

$ glance image-create --name mysql --is-public true --container-format bare --disk-format qcow2 --file mysql.qcow2



[ Install Cinder ]


1. Install the Cinder package

$ git clone http://git.openstack.org/openstack/cinder.git

$ cd cinder

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the DB

$ mysql -uroot -pmysql

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';


3. Register the service

$ keystone user-create --name=cinder --pass=cinder_pass --email=cinder@example.com

$ keystone user-role-add --user=cinder --tenant=service --role=admin

$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

$ keystone endpoint-create --service=cinder --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

$ keystone endpoint-create --service=cinderv2 --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s


4. Create the conf and log directories

$ sudo mkdir -p /var/log/cinder

$ sudo chown -R stack.stack /var/log/cinder

$ sudo mkdir -p /etc/cinder

$ sudo cp -R ~/Git/cinder/etc/cinder/* /etc/cinder/.


5. Change the conf ownership

$ sudo chown -R stack.stack /etc/cinder

$ mv /etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf

$ sudo chown root.root /etc/cinder/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/cinder/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/cinder

$ sudo chown stack.stack /var/lib/cinder

$ mkdir -p /var/lib/cinder/volumes

$ sudo mkdir -p /var/lock/cinder

$ sudo chown stack.stack /var/lock/cinder

$ sudo mkdir -p /var/run/cinder

$ sudo chown stack.stack /var/run/cinder


6. Configure cinder.conf

$ vi /etc/cinder/cinder.conf


[DEFAULT]

rpc_backend=cinder.openstack.common.rpc.impl_kombu

rabbit_host=controller

rabbit_password=rabbit

api_paste_config=api-paste.ini

state_path=/var/lib/cinder

glance_host=controller

lock_path=/var/lock/cinder

debug=True

verbose=True

rootwrap_config=/etc/cinder/rootwrap.conf

auth_strategy=keystone

volume_name_template=volume-%s

iscsi_helper=tgtadm

volumes_dir=$state_path/volumes

# volume_group=cinder-volumes               # moved into the volume-type backend section, so removed here


enabled_backends=lvm-iscsi-driver

default_volume_type=lvm-iscsi-type


[lvm-iscsi-driver]

volume_group=cinder-volumes

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

san_ip=controller

volume_backend_name=lvm-iscsi


[database]

connection = mysql://cinder:cinder_dbpass@controller/cinder


[keystone_authtoken]

auth_host=controller

auth_port=35357

auth_protocol=http

auth_uri=http://controller:5000

admin_user=cinder

admin_password=cinder_pass

admin_tenant_name=service


7. Create the cinder tables

$ cinder-manage db sync


8. Create the backing volume group

$ mkdir -p ~/cinder-volumes

$ cd cinder-volumes

$ dd if=/dev/zero of=cinder-volumes-backing-file bs=1 count=0 seek=5G

$ sudo losetup /dev/loop1 /home/stack/cinder-volumes/cinder-volumes-backing-file

$ sudo fdisk /dev/loop1

n p 1 Enter Enter t 8e w          # fdisk keystrokes: new primary partition 1, set type 8e (Linux LVM), write

$ sudo pvcreate /dev/loop1

$ sudo vgcreate cinder-volumes /dev/loop1


9. Register the init scripts

$ sudo vi /etc/init/cinder-api.conf


description "Cinder api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-api.log" stack


$ sudo service cinder-api start


$ sudo vi /etc/init/cinder-scheduler.conf


description "Cinder scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-scheduler.log" stack


$ sudo service cinder-scheduler start


$ sudo vi /etc/init/cinder-volume.conf


description "Cinder volume server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log" stack


$ sudo service cinder-volume start


10. Register the volume type

$ cinder type-create lvm-iscsi-type

$ cinder type-key lvm-iscsi-type set volume_backend_name=lvm-iscsi


11. Create a volume

$ cinder create --display-name test01 --volume-type lvm-iscsi-type 1




[ Install Nova on the controller ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


$ git clone https://github.com/kanaka/novnc.git

$ sudo cp -R novnc /usr/share/novnc


2. Register the DB

$ mysql -uroot -pmysql

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';


3. Register the service

$ keystone user-create --name=nova --pass=nova_pass --email=nova@example.com

$ keystone user-role-add --user=nova --tenant=service --role=admin

$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"

$ keystone endpoint-create --service=nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s


4. Generate the conf file

$ cd ~/Git/nova

$ sudo tox -i http://xxx.xxx.xxx.xxx/pypi/web/simple -egenconfig          # IP of the internal pypi mirror

$ sudo chown stack.stack /home/stack/Git/nova/etc/nova/nova.conf.sample


5. Create the conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


6. Change the conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


7. Configure nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.151

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

vncserver_listen=192.168.230.151

vncserver_proxyclient_address=192.168.230.151


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


8. Create the nova tables

$ nova-manage db sync


9. Register the init scripts

$ sudo vi /etc/init/nova-api.conf


description "Nova api server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log" stack


$ sudo service nova-api start


$ sudo vi /etc/init/nova-scheduler.conf


description "Nova scheduler server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-scheduler --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log" stack


$ sudo service nova-scheduler start


$ sudo vi /etc/init/nova-conductor.conf


description "Nova conductor server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-conductor.log" stack


$ sudo service nova-conductor start


$ sudo vi /etc/init/nova-consoleauth.conf


description "Nova consoleauth server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-consoleauth --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-consoleauth.log" stack


$ sudo service nova-consoleauth start



$ sudo vi /etc/init/nova-console.conf


description "Nova console server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-console --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-console.log" stack


$ sudo service nova-console start


$ sudo vi /etc/init/nova-cert.conf


description "Nova cert server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-cert --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-cert.log" stack


$ sudo service nova-cert start


$ sudo vi /etc/init/nova-novncproxy.conf


description "Nova novncproxy server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novncproxy.log" stack


$ sudo service nova-novncproxy start



[ Install Neutron on the controller ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Register the DB

$ mysql -uroot -pmysql

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';


3. Register the service

$ keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com

$ keystone service-create --name=neutron --type=network --description="OpenStack Networking"

$ keystone user-role-add --user=neutron --tenant=service --role=admin

$ keystone endpoint-create --service=neutron --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696


4. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo mkdir -p /etc/neutron/plugins

$ sudo cp ~/Git/neutron/etc/*.{ini,conf,json} /etc/neutron/

$ sudo cp -R ~/Git/neutron/etc/neutron/plugins/ml2 /etc/neutron/plugins/.

$ sudo cp -R ~/Git/neutron/etc/neutron/rootwrap.d/ /etc/neutron/.


5. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


6. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

router_distributed = True

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = 86be..........       # use the tenant ID, not the tenant name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron
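As noted above, nova_admin_tenant_id must be the service tenant's ID, not its name. One way to extract it is to parse the table printed by `keystone tenant-get service`; the sketch below runs awk over a sample of that output (the id value is illustrative, not a real tenant):

```shell
# Sample of `keystone tenant-get service` table output (illustrative id)
cat <<'EOF' > /tmp/tenant.txt
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|      id     | 86be0000000000000000000000000000 |
|     name    |             service              |
+-------------+----------------------------------+
EOF
# Pick the row whose Property column is "id" and strip the padding
awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}' /tmp/tenant.txt
```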


7. Configure ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.151

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


8. Create the Neutron tables

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno


9. Register the init script

sudo vi /etc/init/neutron-server.conf


# vim:set ft=upstart ts=2 et:

description "Neutron API Server"

author "Chuck Short <zulcss@ubuntu.com>"


start on runlevel [2345]

stop on runlevel [!2345]


respawn


chdir /var/run


script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  exec start-stop-daemon --start --chuid stack --exec /usr/local/bin/neutron-server -- \
    --config-file=/etc/neutron/neutron.conf \
    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
    --log-file=/var/log/neutron/neutron-server.log $CONF_ARG
end script


Starting the Neutron server manually

neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log


10. Verify the commands

neutron ext-list


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash

sudo service neutron-server $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart



###############   Network   ######################


[ Installing Neutron on the Network Node ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # pbr has version-resolution problems, so install it separately

$ sudo pip install -e .


$ sudo apt-get install dnsmasq


2. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.{ini,conf,json} /etc/neutron/

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


4. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = 86be..........       # use the tenant ID, not the tenant name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Configure ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]

local_ip = 192.168.200.152

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr_snat


7. Configure the DHCP agent

$ vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                      # writing RegionOne here causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-dhcp-agent.conf


description "Neutron dhcp Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/neutron-dhcp-agent.log" stack


$ sudo service neutron-dhcp-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Verify the installation

$ neutron agent-list


12. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-dhcp-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




###############   Compute   ######################


[ Installing Nova on the Compute Node ]


1. Install the Nova package

$ git clone http://git.openstack.org/openstack/nova.git

$ cd nova

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install pbr==0.9                # pbr has version-resolution problems, so install it separately

$ sudo pip install -e .


2. Create the conf and log directories

$ sudo mkdir -p /var/log/nova

$ sudo chown -R stack.stack /var/log/nova

$ sudo mkdir -p /etc/nova

$ sudo cp -R ~/Git/nova/etc/nova/* /etc/nova/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/nova

$ mv /etc/nova/nova.conf.sample /etc/nova/nova.conf

$ mv /etc/nova/logging_sample.conf /etc/nova/logging.conf

$ sudo chown root.root /etc/nova/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/nova/rootwrap.d                 # must be owned by root


$ sudo mkdir -p /var/lib/nova

$ sudo chown stack.stack /var/lib/nova

$ sudo mkdir -p /var/lib/nova/instances

$ sudo chown stack.stack /var/lib/nova/instances

$ sudo mkdir -p /var/lock/nova

$ sudo chown stack.stack /var/lock/nova

$ sudo mkdir -p /var/run/nova

$ sudo chown stack.stack /var/run/nova


Copy nova.conf and logging.conf (run on the controller node)

$ scp /etc/nova/nova.conf /etc/nova/logging.conf stack@compute:/etc/nova/.


4. Configure nova.conf

$ vi /etc/nova/nova.conf


[DEFAULT]

rabbit_host=controller

rabbit_password=rabbit

rpc_backend=rabbit

my_ip=192.168.230.153

state_path=/var/lib/nova

rootwrap_config=/etc/nova/rootwrap.conf

api_paste_config=api-paste.ini

auth_strategy=keystone

allow_resize_to_same_host=true

network_api_class=nova.network.neutronv2.api.API

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

force_dhcp_release=true

security_group_api=neutron

lock_path=/var/lock/nova

debug=true

verbose=true

log_dir=/var/log/nova

compute_driver=libvirt.LibvirtDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

novncproxy_base_url=http://controller:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=controller


[cinder]

catalog_info=volume:cinder:publicURL


[database]

connection = mysql://nova:nova_dbpass@controller/nova


[glance]

host=controller


[keystone_authtoken]

auth_uri=http://controller:5000

auth_host = controller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova_pass


[libvirt]

use_virtio_for_bridges=true

virt_type=kvm


[neutron]

metadata_proxy_shared_secret=openstack

url=http://192.168.230.151:9696

admin_username=neutron

admin_password=neutron_pass

admin_tenant_name=service

admin_auth_url=http://controller:5000/v2.0

auth_strategy=keystone


5. Register the init script

$ sudo vi /etc/init/nova-compute.conf


description "Nova compute server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "nova-compute --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-compute.log" stack


$ sudo service nova-compute start



[ Installing Neutron on the Compute Node ]


1. Install the Neutron package

$ git clone http://git.openstack.org/openstack/neutron.git

$ cd neutron

$ git checkout -b 2014.2.1 tags/2014.2.1

$ sudo pip install -e .


2. Create the conf and log directories

$ sudo mkdir -p /var/log/neutron

$ sudo chown -R stack.stack /var/log/neutron

$ sudo mkdir -p /etc/neutron

$ sudo cp ~/Git/neutron/etc/*.{ini,conf,json} /etc/neutron/

$ sudo cp -R ~/Git/neutron/etc/neutron/* /etc/neutron/.


3. Change conf ownership

$ sudo chown -R stack.stack /etc/neutron

$ sudo chown root.root /etc/neutron/rootwrap.conf                 # must be owned by root

$ sudo chown -R root.root /etc/neutron/rootwrap.d


$ sudo mkdir -p /var/lib/neutron

$ sudo chown stack.stack /var/lib/neutron

$ sudo mkdir -p /var/run/neutron

$ sudo chown stack.stack /var/run/neutron


Copy the etc files (from the network node)

$ scp /etc/neutron/* stack@compute:/etc/neutron/.

$ scp /etc/neutron/plugins/ml2/ml2_conf.ini stack@compute:/etc/neutron/plugins/ml2/.


4. Configure neutron.conf

$ vi /etc/neutron/neutron.conf


[DEFAULT]

verbose = True

debug = True

state_path = /var/lib/neutron

lock_path = $state_path/lock

core_plugin = ml2

service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://controller:8774/v2

nova_region_name = regionOne

nova_admin_username = nova

nova_admin_tenant_id = 86be..........       # use the tenant ID, not the tenant name

nova_admin_password = nova_pass

nova_admin_auth_url = http://controller:35357/v2.0

rabbit_host=controller

rabbit_password=rabbit

notification_driver=neutron.openstack.common.notifier.rpc_notifier

rpc_backend=rabbit


[agent]

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass


[database]

connection = mysql://neutron:neutron_dbpass@controller/neutron


5. Configure ml2_conf.ini

$ vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vxlan

mechanism_drivers = openvswitch,linuxbridge,l2population


[ml2_type_vxlan]

vni_ranges = 1001:2000

vxlan_group = 239.1.1.1


[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[agent]                                          # added on the compute node

enable_distributed_routing = True

tunnel_types = vxlan

l2_population = True


[ovs]                                            # added on the compute node

local_ip = 192.168.200.153

tunnel_types = vxlan

tunnel_id_ranges = 1001:2000

enable_tunneling = True

bridge_mappings = external:br-ex


6. Configure the L3 agent

$ vi /etc/neutron/l3_agent.ini


[DEFAULT]

debug = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

agent_mode = dvr


7. Configure the DHCP agent

$ vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

enable_isolated_metadata = True

enable_metadata_network = True

dhcp_delete_namespaces = True

verbose = True


8. Configure the metadata agent

$ vi /etc/neutron/metadata_agent.ini


[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne                    # writing RegionOne here causes an error

admin_tenant_name = service

admin_user = neutron

admin_password = neutron_pass

nova_metadata_ip = controller

metadata_proxy_shared_secret = openstack

verbose = True


$ keystone endpoint-list                    # check the region first, then set it


9. Create bridges and ports

$ sudo ovs-vsctl add-br br-ex

$ sudo ovs-vsctl add-port br-ex eth0

$ sudo ovs-vsctl add-br br-tun

$ sudo ovs-vsctl add-port br-tun eth2


10. Register the init scripts

$ sudo vi /etc/init/neutron-openvswitch-agent.conf


description "Neutron OpenVSwitch Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-openvswitch-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-openvswitch-agent.log" stack


$ sudo service neutron-openvswitch-agent start


$ sudo vi /etc/init/neutron-l3-agent.conf


description "Neutron L3 Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


pre-start script

  # Check to see if openvswitch plugin in use by checking

  # status of cleanup upstart configuration

  if status neutron-ovs-cleanup; then

    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=neutron-l3-agent

  fi

end script


exec su -c "neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/neutron-l3-agent.log" stack


$ sudo service neutron-l3-agent start


$ sudo vi /etc/init/neutron-metadata-agent.conf


description "Neutron metadata Agent server"

author "somebody"


start on (local-filesystems and net-device-up IFACE!=lo)

stop on runlevel [016]


respawn


exec su -c "neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/neutron-metadata-agent.log" stack


$ sudo service neutron-metadata-agent start


11. Neutron Service Restart

$ vi service-neutron.sh


#!/bin/bash


sudo service neutron-openvswitch-agent $1

sudo service neutron-metadata-agent $1

sudo service neutron-l3-agent $1


$ chmod 755 service-neutron.sh

$ ./service-neutron.sh restart




Creating the external network

$ neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat


neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.75.193,end=192.168.75.254 --disable-dhcp --gateway 192.168.75.2 192.168.75.0/24


Creating the internal network

neutron net-create demo-net --provider:network_type vxlan 

neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24


Creating the router

$ neutron router-create demo-router

$ neutron router-interface-add demo-router demo-subnet

$ neutron router-gateway-set demo-router ext-net


Registering security group rules

$ neutron security-group-rule-create --protocol icmp --direction ingress default

$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default


Setting the MTU

1. Set it in /etc/network/interfaces

auto eth2

iface eth2 inet static

address 192.168.200.152

netmask 255.255.255.0

mtu 9000


$ sudo ifdown eth2

$ sudo ifup eth2


2. Set it dynamically (takes effect immediately, but is lost after a reboot)

$ sudo ifconfig eth2 mtu 9000
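To confirm the change, an interface's current MTU can also be read from sysfs; the sketch below uses lo instead of the eth2 from the text so it works on any Linux host:

```shell
# Current MTU of an interface via sysfs (substitute eth2 on the real host)
cat /sys/class/net/lo/mtu
```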


Registering a default gateway route

$ sudo route add -net "0.0.0.0/0" gw "10.0.0.1"


Creating a VM

$ nova boot test01 --flavor 1 --image 10f9779f-b67d-45dc-ac9b-cf6a30f88b59 --nic net-id=0a4c3188-3500-45a4-83f6-416e686d645e


Adding a floating IP

$ neutron floatingip-create ext-net

$ neutron floatingip-associate [floatingip-id] [fixedip-port-id]


Calling the metadata service

Inside the cirros VM

$ wget http://169.254.169.254/latest/meta-data/instance-id


Calling metadata directly from the controller node

$ curl \
  -H 'x-instance-id: e9b12a36-ae7a-4d2c-be03-319655789927' \
  -H 'x-tenant-id: 7d7c68c1d33f4ffb8a7c5bca770e394c' \
  -H 'x-instance-id-signature: 80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f' \
  http://192.168.230.230:8775/latest/meta-data


[ Computing the x-instance-id-signature ]

>>> import hmac

>>> import hashlib

>>> hmac.new('opensack', 'e9b12a36-ae7a-4d2c-be03-319655789927', hashlib.sha256).hexdigest()

'80f2d3ed5615bc93ccd7800e58780ba3fa754763ad0b2574240b8d4699bb254f'

>>>
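The same HMAC-SHA256 signature can be produced in the shell with openssl instead of Python. This is a sketch that assumes the shared secret `openstack` from metadata_agent.ini and the instance id from the example above; it only checks that the result is a 64-character hex digest rather than asserting a specific value:

```shell
# HMAC-SHA256 over the instance id, keyed by metadata_proxy_shared_secret
secret='openstack'
instance_id='e9b12a36-ae7a-4d2c-be03-319655789927'
# printf avoids a trailing newline; awk keeps only the hex digest field
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$sig"
```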


Viewing the configuration without comments

cat /etc/nova/nova.conf  | grep -v ^# | grep -v ^$
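The same filter can be done in a single awk process; the sketch below runs it against a small made-up config fragment so the effect is visible:

```shell
# Sample config fragment (made-up values)
cat <<'EOF' > /tmp/sample.conf
# a comment
verbose=true

debug=true
EOF
# Keep lines that are neither comments nor blank
awk '!/^#/ && !/^$/' /tmp/sample.conf
```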


Checking the vxlan connection

$ sudo ovs-ofctl show br-tun


Order of steps to delete a net

1. Remove the router's interface to the subnet

$ neutron router-interface-delete [router-id] [subnet-id]


2. Delete the subnet

$ neutron subnet-delete [subnet-id]


3. Delete the net

$ neutron net-delete [net-id]
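The three steps above can be sketched as a dry-run shell function that only prints the neutron commands in the required order (the function name and sample IDs are hypothetical; nothing is actually deleted):

```shell
# Dry run: print the deletion commands in dependency order
delete_net() {
  local router_id=$1 subnet_id=$2 net_id=$3
  echo "neutron router-interface-delete $router_id $subnet_id"
  echo "neutron subnet-delete $subnet_id"
  echo "neutron net-delete $net_id"
}
delete_net demo-router demo-subnet demo-net
```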


Building a distribution installable with pip

$ sudo python setup.py sdist --formats=gztar

























Posted by seungkyua@gmail.com

1. Download the signing key for apt-get

$ wget -qO - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -


2. Register the repository

$ sudo vi /etc/apt/sources.list


deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main


3. Install

$ sudo apt-get update && sudo apt-get install elasticsearch


4. Install locations

- binary location

/usr/share/elasticsearch


- configuration file

/etc/elasticsearch/elasticsearch.yml


- init script

/etc/init.d/elasticsearch


- environment settings file

/etc/default/elasticsearch





1. Solution


package id110101;

/* @BEGIN_OF_SOURCE_CODE */

/* @JUDGE_ID: 75154 100 Java "" */

import java.io.*;
import java.util.*;

public class The3nPlus1 {

    public static void main(String[] args) {
        The3nPlus1 main = new The3nPlus1();
        main.begin();
    }

    private void begin() {
        String line;
        StringTokenizer st;
        long first, second;
        long maxNumber;
        while ((line = readLine(255)) != null && !line.isEmpty()) {
            st = new StringTokenizer(line);
            first = Long.parseLong(st.nextToken());
            second = Long.parseLong(st.nextToken());
            maxNumber = doCalc(first, second);
            System.out.println(String.format("%d %d %d", first, second, maxNumber));
        }
    }

    // Maximum 3n+1 cycle length over the range [first, second]
    private long doCalc(long first, long second) {
        long value = 0;
        long maxNumber = 0;
        long number = 0;
        for (value = first; value < second + 1; value++) {
            number = doLoop(value);
            maxNumber = maxNumber > number ? maxNumber : number;
        }
        return maxNumber;
    }

    // 3n+1 cycle length of a single number
    private long doLoop(long number) {
        long value = 0;
        long count = 1;
        for (value = number; value > 0; ) {
            if (value == 1) break;
            if (value % 2 == 1) {
                value = 3 * value + 1;
            } else {
                value = value / 2;
            }
            count++;
        }
        return count;
    }

    private static String readLine(int maxLg) {
        byte lin[] = new byte[maxLg];
        int lg = 0;
        int car = -1;
        String line = "";
        try {
            while (lg < maxLg) {
                car = System.in.read();
                // 10: Line feed   13: Carriage return
                if ((car < 0) || (car == 10)) {
                    break;
                }
                lin[lg++] += car;
            }
        } catch (IOException e) {
            return null;
        }
        if ((car < 0) && (lg == 0)) return null;
        line = new String(lin, 0, lg);
        line = line.replaceAll("[\n\r]", "");
        return line;
    }
}

/* @END_OF_SOURCE_CODE */



1. Directory layout

ProgrammingChallenges

  - classes

  - datafiles

     - id110101

        - The3nPlus1-in

        - The3nPlus1-out

  - src

     - id110101

        - The3nPlus1.java


  test.bat


2. test.bat

@echo off


set PACKAGE_NAME=id110101

set PROGRAM_NAME=The3nPlus1



set OUTPUT_DIR=classes

set DATA_DIR=datafiles


javac -d %OUTPUT_DIR% src/%PACKAGE_NAME%/%PROGRAM_NAME%.java

cd %OUTPUT_DIR%

java -cp . %PACKAGE_NAME%.%PROGRAM_NAME% < ../%DATA_DIR%/%PACKAGE_NAME%/%PROGRAM_NAME%-in > out


@echo on

fc out ../%DATA_DIR%/%PACKAGE_NAME%/%PROGRAM_NAME%-out


@echo off

cd ..


3. Basic I/O skeleton

package id110101;

/* @BEGIN_OF_SOURCE_CODE */

/* @JUDGE_ID: 75154 100 Java "" */

import java.io.*;
import java.util.*;

public class The3nPlus1 {

    public static void main(String[] args) {
        The3nPlus1 main = new The3nPlus1();
        main.begin();
    }

    private void begin() {
        String line;
        StringTokenizer st;
        long first, second;
        while ((line = readLine(255)) != null && !line.isEmpty()) {
            st = new StringTokenizer(line);
            first = Long.parseLong(st.nextToken());
            second = Long.parseLong(st.nextToken());
            System.out.println(String.format("%d %d", first, second));
        }
    }

    private static String readLine(int maxLg) {
        byte lin[] = new byte[maxLg];
        int lg = 0;
        int car = -1;
        String line = "";
        try {
            while (lg < maxLg) {
                car = System.in.read();
                // 10: Line feed   13: Carriage return
                if ((car < 0) || (car == 10)) {
                    break;
                }
                lin[lg++] += car;
            }
        } catch (IOException e) {
            return null;
        }
        if ((car < 0) && (lg == 0)) return null;
        line = new String(lin, 0, lg);
        line = line.replaceAll("[\n\r]", "");
        return line;
    }
}

/* @END_OF_SOURCE_CODE */














