[ Kubernetes manual installation ]

0. Server configuration

Master : 192.168.75.129  (etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler)

Node   : 192.168.75.130  (flannel, kube-proxy, kubelet)


$ gpasswd -a stack sudo      # (? this doesn't seem to work??)
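
If gpasswd isn't available or fails, adding the user to the sudo group with usermod should have the same effect (a hedged alternative, assuming a Debian/Ubuntu-style sudo group):

$ sudo usermod -aG sudo stack      # add stack to the sudo group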


0. Download the Kubernetes source and set it up in WebStorm

# Download the source

Install Go and set the PATH (http://ahnseungkyu.com/204)

$ cd ~/Documents/go_workspace/src

$ go get k8s.io/kubernetes


$ cd k8s.io/kubernetes

$ git checkout -b v1.1.2 tags/v1.1.2


# Create a Go project with WebStorm's New Project

Path: ~/Documents/go_workspace/src/k8s.io/kubernetes


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> add to Go SDK

Path : /usr/local/go


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> Go Libraries >> add the path below under Project libraries

Path: ~/Documents/go_workspace/src/k8s.io/kubernetes/Godeps/_workspace



[ Install on both the Master and Minion servers ]

1. Install the required software with apt-get

# Install Docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

deb https://apt.dockerproject.org/repo ubuntu-wily main


# Ubuntu Xenial (16.04)

deb https://apt.dockerproject.org/repo ubuntu-xenial main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo apt-get install curl

$ sudo usermod -a -G docker stack      # add the stack user to the docker group
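
Note: the docker group membership only takes effect on the next login; newgrp can pick it up in the current session:

$ newgrp docker                        # start a shell with the new group applied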

$ sudo systemctl start docker.service



2. Install Go with apt-get

$ sudo apt-get install linux-libc-dev golang gcc

$ sudo apt-get install ansible



3. Register the hosts entries (run on every server, as root)

echo "192.168.75.129 kube-master

192.168.75.130 kube-node01" >> /etc/hosts



[ Kubernetes Master installation ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-master

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.129:2380 \

--listen-peer-urls http://192.168.75.129:2380 \

--listen-client-urls http://192.168.75.129:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.129:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl start etcd.service
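
Once both members are configured and started (the node side is set up further below), the bundled etcdctl gives a quick sanity check:

$ etcdctl cluster-health               # both members should report healthy
$ etcdctl member list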



$ etcdctl set /coreos.com/network/config "{\"Network\":\"172.16.0.0/16\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.10.0-24 "{\"PublicIP\":\"192.168.75.129\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.93.0-24 "{\"PublicIP\":\"192.168.75.130\"}"


$ etcdctl ls /                          # etcdctl ls --recursive (shows the full tree)

/coreos.com/network/config

/coreos.com/network/subnets/172.16.10.0-24

/coreos.com/network/subnets/172.16.93.0-24

/registry


$ etcdctl get /coreos.com/network/config

{"Network":"172.16.0.0/16"}


$ etcdctl get /coreos.com/network/subnets/172.16.10.0-24     # the Master's flannel0 bridge IP

{"PublicIP":"192.168.75.129"}


$ etcdctl get /coreos.com/network/subnets/172.16.93.0-24     # Node01's flannel0 bridge IP

{"PublicIP":"192.168.75.130"}



5. Install flannel
$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.4 tags/v0.5.4     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which ports etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://localhost:4001,http://localhost:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service
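
To confirm flanneld picked up its lease from etcd: flannel 0.5.x writes the acquired subnet to /run/flannel/subnet.env and brings up the flannel0 interface:

$ cat /run/flannel/subnet.env          # FLANNEL_SUBNET / FLANNEL_MTU
$ ip addr show flannel0                # should carry an address inside 172.16.0.0/16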



6. Install the Kubernetes API Server

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.1 origin/release-1.1

$ sudo make release


$ cd _output/release-tars

$ sudo tar zxvf kubernetes-server-linux-amd64.tar.gz


$ cd ~

$ git clone https://github.com/kubernetes/contrib.git

$ sudo cp -R ~/downloads/kubernetes/_output/* ~/downloads/contrib/ansible/roles/

$ cd ~/downloads/contrib/ansible/roles

$ sudo chown stack.stack -R *

$ vi  ~/downloads/contrib/ansible/inventory

[masters]

kube-master


[etcd]

kube-master


[nodes]

kube-node01



$ sudo su -

# ssh-keygen

# for node in kube-master kube-node01; do

ssh-copy-id ${node}

done

# exit


$ vi ~/downloads/contrib/ansible/group_vars/all.yml

source_type: localBuild

cluster_name: cluster.local

ansible_ssh_user: root

kube_service_addresses: 10.254.0.0/16

networking: flannel

flannel_subnet: 172.16.0.0

flannel_prefix: 12

flannel_host_prefix: 24

cluster_logging: true

cluster_monitoring: true

kube-ui: true

dns_setup: true

dns_replicas: 1


$ cd ~/downloads/contrib/ansible

$ ./setup.sh








$ sudo cp kubernetes/server/bin/kube-apiserver /usr/bin

$ sudo cp kubernetes/server/bin/kube-controller-manager /usr/bin

$ sudo cp kubernetes/server/bin/kube-scheduler /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-apiserver.service


[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

ExecStart=/usr/bin/kube-apiserver \

--api-rate=10 \

--bind-address=0.0.0.0 \

--etcd_servers=http://127.0.0.1:4001 \

--portal_net=10.254.0.0/16 \                              # where is this used?

--insecure-bind-address=0.0.0.0 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--kubelet_port=10250 \

--service_account_key_file=/tmp/kube-serviceaccount.key \

--service_account_lookup=false \

--service-cluster-ip-range=172.16.0.0/16            # should this match the flannel network?

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-apiserver.service


$ sudo systemctl enable kube-apiserver.service

$ sudo systemctl start kube-apiserver.service


$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl restart kube-apiserver
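
A quick smoke test against the insecure port (8080 by default) confirms the API server is answering:

$ curl http://127.0.0.1:8080/healthz   # should print: ok
$ curl http://127.0.0.1:8080/version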


7. Install the Kubernetes Controller Manager

$ cd /lib/systemd/system

$ sudo vi kube-controller-manager.service


[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-controller-manager \

--address=0.0.0.0 \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true 

#--service_account_private_key_file=/tmp/kube-serviceaccount.key

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-controller-manager.service


$ sudo systemctl enable kube-controller-manager.service

$ sudo systemctl start kube-controller-manager.service


$ sudo systemctl daemon-reload

$ sudo systemctl restart kube-controller-manager


8. Install the Kubernetes Scheduler

$ cd /lib/systemd/system

$ sudo vi kube-scheduler.service


[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-scheduler \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-scheduler.service


$ sudo systemctl enable kube-scheduler.service

$ sudo systemctl start kube-scheduler.service
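
With all three control-plane services up, kubectl can report their health in one call against the local insecure port:

$ kubectl -s http://127.0.0.1:8080 get componentstatuses     # scheduler, controller-manager, etcd should be Healthy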


9. Register the IP range flannel will use in etcd (needed when flannel runs on the nodes)

$ sudo etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'



[ Service Cluster IP Range ]

10.0.0.0 - 10.255.255.255 (10/8 prefix)

172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

192.168.0.0 - 192.168.255.255 (192.168/16 prefix)




[ Kubernetes Minion installation ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-node01

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.130:2380 \

--listen-peer-urls http://192.168.75.130:2380 \

--listen-client-urls http://192.168.75.130:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.130:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl start etcd.service


$ etcdctl member list


5. Install flannel
$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.5 tags/v0.5.5     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which ports etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-node01:4001,http://kube-node01:2379 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://kube-node01:4001,http://kube-node01:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service




6. Install the Kubernetes Proxy

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo cp kubernetes/server/bin/kube-proxy /usr/bin

$ sudo cp kubernetes/server/bin/kubelet /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-proxy.service


[Unit]

Description=Kubernetes Proxy

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kube-proxy \

--master=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--v=0                                                     # log verbosity (debug mode)

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-proxy.service


$ sudo systemctl enable kube-proxy.service

$ sudo systemctl start kube-proxy.service



7. Install the Kubernetes Kubelet

$ cd /lib/systemd/system

$ sudo vi kubelet.service


[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kubelet \

--address=0.0.0.0 \

--port=10250 \

--hostname_override=kube-minion \

--api_servers=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--cluster_domain=cluster.local \

--v=0                                                      # log verbosity (debug mode)

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kubelet.service


$ sudo systemctl enable kubelet.service

$ sudo systemctl start kubelet.service


# Restart the docker service

$ sudo service docker restart

8. Install flannel (it pulls the Network and other settings from etcd) - operation still needs to be verified
$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0



9. Verify the registered nodes

$ sudo kubectl get nodes


NAME                 LABELS                                                    STATUS

192.168.75.202   kubernetes.io/hostname=192.168.75.202    NotReady

kube-minion        kubernetes.io/hostname=kube-minion         Ready


10. Start the services

# Master server

$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service


# Minion server

$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kubelet.service



11. Start a MySQL service

$ mkdir pods

$ cd pods

$ vi mysql.yaml

apiVersion: v1

kind: Pod

metadata:

  name: mysql

  labels:

    name: mysql

spec:

  containers:

    - resources:

        limits:

          cpu: 1

      image: mysql

      name: mysql

      env:

        - name: MYSQL_ROOT_PASSWORD

          # change this

          value: root

      ports:

        - containerPort: 3306

          name: mysql


$ sudo kubectl create -f mysql.yaml

$ sudo kubectl get pods


$ vi mysql-service.yaml

apiVersion: v1

kind: Service

metadata:

  labels:

    name: mysql

  name: mysql

spec:

  publicIPs:

    - 192.168.75.202

  ports:

    # the port that this service should serve on

    - port: 3306

  # label keys and values that must match in order to receive traffic for this service

  selector:

    name: mysql


$ sudo kubectl create -f mysql-service.yaml

$ sudo kubectl get services
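
To verify the service end to end, connect through the public IP with the root password set in mysql.yaml (assumes the mysql client is installed wherever this is run):

$ mysql -h 192.168.75.202 -P 3306 -u root -proot -e 'show databases;'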







**************************************************

*****  Install with juju  (failed)                        ***********

**************************************************

1. Install juju

$ sudo add-apt-repository ppa:juju/stable

$ sudo apt-get update

$ sudo apt-get install juju-core juju-quickstart

$ juju quickstart u/kubernetes/kubernetes-cluster












**************************************************

*****  Reference notes                                  ***********

**************************************************


3. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1

$ ./build                   # builds the flanneld binary into a new bin directory

$ cp bin/flanneld /opt/bin




4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.1.1/etcd-v2.1.1-linux-amd64.tar.gz -o etcd-v2.1.1-linux-amd64.tar.gz

$ tar xzvf etcd-v2.1.1-linux-amd64.tar.gz

$ sudo cp  etcd-v2.1.1-linux-amd64/bin/etcd* /opt/bin

$ cd /var/lib

$ sudo mkdir etcd

$ sudo chown stack.docker etcd

$ sudo mkdir /var/run/kubernetes

$ sudo chown stack.docker /var/run/kubernetes

$ sudo vi /etc/default/etcd

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"



3. Install the Kubernetes Master

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/master/* /usr/bin

$ sudo cp -f binaries/kubectl /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl enable /etc/systemd/etcd.service

$ sudo systemctl enable /etc/systemd/kube-apiserver.service

$ sudo systemctl enable /etc/systemd/kube-controller-manager.service

$ sudo systemctl enable /etc/systemd/kube-scheduler.service


$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service






4. Install the Kubernetes Minion

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/minion/* /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd


$ sudo systemctl enable /etc/systemd/kube-proxy.service

$ sudo systemctl enable /etc/systemd/kube-kubelet.service


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kube-kubelet.service












4. Install Kubernetes

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo chown -R stack.docker *

$ tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo su -

$ echo "192.168.75.201 kube-master

192.168.75.202 kube-minion" >> /etc/hosts

$ exit





5. Install the Kubernetes Master


# Services that run on kube-master

etcd

flanneld

kube-apiserver

kube-controller-manager

kube-scheduler


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ cp server/bin/kube-apiserver /opt/bin/

$ cp server/bin/kube-controller-manager /opt/bin/

$ cp server/bin/kube-scheduler /opt/bin/

$ cp server/bin/kubectl /opt/bin/

$ cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/etcd.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-apiserver.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-controller-manager.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-scheduler.conf /etc/init/


$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/etcd /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-apiserver /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-controller-manager /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-scheduler /etc/init.d/


$ sudo vi /etc/default/kube-apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet_port=10250"

KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

KUBE_API_ARGS=""



$ sudo vi /etc/default/kube-controller-manager

KUBELET_ADDRESSES="--machines=192.168.75.202"






6. Install the Minion


# Services that run on kube-minion

flanneld

kubelet

kube-proxy


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ sudo cp server/bin/kubelet /opt/bin/

$ sudo cp server/bin/kube-proxy /opt/bin/

$ sudo cp server/bin/kubectl /opt/bin/

$ sudo cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kubelet.conf /etc/init

$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kube-proxy.conf /etc/init


$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kubelet /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kube-proxy /etc/init.d/












$ cd ~/kubernetes

$ vi cluster/ubuntu/config-default.sh

export nodes=${nodes:-"stack@192.168.75.201 stack@192.168.75.202"}

roles=${roles:-"ai i"}

export NUM_MINIONS=${NUM_MINIONS:-2}

export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}

export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}


$ cd cluster

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh








3. Install Go (official binary tarball)

https://golang.org/dl/

$ curl -L https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz -o go1.4.2.linux-amd64.tar.gz

$ tar xvf go1.4.2.linux-amd64.tar.gz

























[ Gerrit / Jenkins code review setup ]

1. Install git

# apt-get install git-core git-review

# adduser gerrit

# mkdir -p /git_repo

# chown -R gerrit.gerrit /git_repo

# sudo mkdir -p /git_review

# chown -R gerrit.gerrit /git_review

# git init --bare /git_repo/paas.git


2. Download Gerrit

https://gerrit-releases.storage.googleapis.com/index.html


3. Install MySQL

# mysql -uroot -p

mysql> CREATE USER 'gerrit'@'localhost' IDENTIFIED BY 'secret';

mysql> CREATE DATABASE reviewdb;

mysql> ALTER DATABASE reviewdb charset=utf8;

mysql> GRANT ALL ON reviewdb.* TO 'gerrit'@'localhost';

mysql> FLUSH PRIVILEGES;



4. Install Apache2

$ sudo apt-get install apache2 apache2-utils libapache2-mod-proxy-html libxml2-dev

$ sudo a2enmod proxy_http

$ sudo a2enmod proxy

$ sudo service apache2 restart


# sudo vi /etc/apache2/sites-available/gerrit.conf

<VirtualHost *:8080>

  ServerName localhost

  ProxyRequests Off

  ProxyVia Off

  ProxyPreserveHost On


  <Proxy *>

    Order deny,allow

    Allow from all

  </Proxy>


  <Location /login/>

    AuthType Basic

    AuthName "Gerrit Code Review"

    Require valid-user

    AuthUserFile /git_review/etc/passwords

  </Location>


  AllowEncodedSlashes On

  ProxyPass / http://127.0.0.1:8081/

  ProxyPassReverse / http://127.0.0.1:8081/                # HTTP auth based on external SSO validation

#  RequestHeader set REMOTE-USER %{REMOTE_USER}          # HTTP auth based on external SSO validation

</VirtualHost>


$ cd /etc/apache2/sites-available

$ sudo a2ensite gerrit.conf

$ sudo vi /etc/apache2/ports.conf

Listen 8080


$ sudo service apache2 restart




5. Install the Gerrit site

# apt-get install openjdk-7-jdk


# To install Oracle Java instead

# add-apt-repository ppa:webupd8team/java

# apt-get update

# apt-get install oracle-java7-installer



# su - gerrit

$ cd /git_review

$ cp /home/stack/Downloads/gerrit-2.11.3.war .

$ java -jar gerrit-2.11.3.war init -d /git_review

 *** Git Repositories

*** 


Location of Git repositories   [git]: /git_repo


*** SQL Database

*** 


Database server type           [h2]: mysql


Gerrit Code Review is not shipped with MySQL Connector/J 5.1.21

**  This library is required for your configuration. **

Download and install it now [Y/n]?

Downloading http://repo2.maven.org/maven2/mysql/mysql-connector-java/5.1.21/mysql-connector-java-5.1.21.jar ... OK

Checksum mysql-connector-java-5.1.21.jar OK

Server hostname                [localhost]: 

Server port                    [(mysql default)]: 

Database name                  [reviewdb]: 

Database username              [gerrit]:

gerrit2's password            : secret


*** Index

*** 


Type                           [LUCENE/?]: 


The index must be rebuilt before starting Gerrit:

  java -jar gerrit.war reindex -d site_path


*** User Authentication

*** 


Authentication method          [OPENID/?]: http

# Get username from custom HTTP header [y/N]? y                    # 외부 SSO HTTP 인증시

# Username HTTP Header [SM_USER]: REMOTE_USER_RETURN    # 외부 SSO HTTP 인증시

SSO logout URL  : http://aa:aa@192.168.75.141:8080/


*** Review Labels

*** 


Install Verified label         [y/N]? 


*** Email Delivery

*** 


SMTP server hostname       [localhost]: smtp.gmail.com

SMTP server port               [(default)]: 465

SMTP encryption                [NONE/?]: SSL

SMTP username                 [gerrit]: skanddh@gmail.com


*** Container Process

*** 


Run as                         [gerrit]: 

Java runtime                   [/usr/local/jdk1.8.0_31/jre]: 

Copy gerrit-2.11.3.war to /git_review/bin/gerrit.war [Y/n]? 

Copying gerrit-2.11.3.war to /git_review/bin/gerrit.war


*** SSH Daemon

*** 


Listen on address              [*]: 

Listen on port                 [29418]: 


Gerrit Code Review is not shipped with Bouncy Castle Crypto SSL v151

  If available, Gerrit can take advantage of features

  in the library, but will also function without it.

Download and install it now [Y/n]? N


*** HTTP Daemon

*** 


Behind reverse proxy           [y/N]? y

Proxy uses SSL (https://)      [y/N]? 

Subdirectory on proxy server   [/]: 

Listen on address              [*]: 127.0.0.1        # because it sits behind a reverse proxy

Listen on port                 [8081]: 

Canonical URL                  [http://127.0.0.1/]:


$ java -jar bin/gerrit.war reindex -d /git_review


$ htpasswd -c /git_review/etc/passwords skanddh

# service apache2 restart



6. start/stop Daemon

$ /git_review/bin/gerrit.sh restart

$ /git_review/bin/gerrit.sh start

$ /git_review/bin/gerrit.sh stop


$ sudo ln -snf /git_review/bin/gerrit.sh /etc/init.d/gerrit.sh

$ sudo ln -snf /etc/init.d/gerrit.sh /etc/rc3.d/S90gerrit



[ Enable HTTPS ]

$ vi gerrit.conf

[httpd]

         listenUrl = proxy-https://127.0.0.1:8081/


$ vi /etc/httpd/conf/httpd.conf

LoadModule ssl_module modules/mod_ssl.so

LoadModule mod_proxy modules/mod_proxy.so

<VirtualHost _default_:443>

SSLEngine on

SSLProtocol all -SSLv2

SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

SSLCertificateFile /etc/pki/tls/certs/server.crt

SSLCertificateKeyFile /etc/pki/tls/private/server.key

SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

ProxyPass / http://127.0.0.1:8081/

ProxyPassReverse / http://127.0.0.1:8081/

</VirtualHost>


Generate a certificate

$ sudo mkdir -p /etc/pki/tls/private

$ sudo mkdir -p /etc/pki/tls/certs

$ sudo openssl req -x509 -days 3650 \

-nodes -newkey rsa:2048 \

-keyout /etc/pki/tls/private/server.key -keyform pem \

-out /etc/pki/tls/certs/server.crt -outform pem



-----

Country Name (2 letter code) [AU]:KO

State or Province Name (full name) [Some-State]:Seoul

Locality Name (eg, city) []:Seoul

Organization Name (eg, company) [Internet Widgits Pty Ltd]:MyCompany

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:myhost.mycompany.com

Email Address []:admin@myhost.mycompany.com

$ cd /etc/pki/tls/certs

$ sudo cp server.crt server-chain.crt



Register user.email and user.name

$ git config user.name "Seungkyu Ahn"

$ git config user.email "skanddh@gmail.com"


Register the password (credential cache)

$ git config credential.helper cache                             # cached for 15 minutes by default

$ git config credential.helper 'cache --timeout=3600'      # cached for 1 hour


Install the commit-message hook

$ curl -Lo .git/hooks/commit-msg http://localhost:8080/tools/hooks/commit-msg

$ chmod +x .git/hooks/commit-msg


Review (register the gerrit remote URL)

$ git remote add gerrit http://localhost:8080/hello-project


# register this in the server-side project so it is fetched on clone

$ vi .gitreview


[gerrit]

host=localhost

port=8080

project=hello-project

defaultbranch=master


$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git review

(edit 2)

$ git add

$ git commit --amend

$ git review



Review (pushing directly)

$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git push origin HEAD:refs/for/master%topic=bug/1



[ Install Jenkins ]

Download Jenkins into Tomcat's webapps directory

# adduser jenkins

# chown -R jenkins.jenkins apache-tomcat-8.0.26

# su - jenkins


http://jenkins-ci.org/

$ cd /usr/local/apache-tomcat-8.0.26/webapps

$ wget http://updates.jenkins-ci.org/download/war/1.580.1/jenkins.war

$ wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war                       # latest version


Change the Tomcat port and URIEncoding

$ vi /usr/local/apache-tomcat-8.0.26/conf/server.xml


<Connector port="7070" protocol="HTTP/1.1"

           connectionTimeout="20000"

           redirectPort="8443"

           URIEncoding="UTF-8" />


$ /usr/local/apache-tomcat-8.0.26/bin/startup.sh


Access Jenkins

http://192.168.75.141:7070/jenkins/


Security settings in the web UI

(left menu) Manage Jenkins

Configure Global Security

  - Enable security

  - Security Realm : Jenkins’ own user database

  - Authorization : Matrix-based security

  - User/group to add: admin


After saving, sign up with the admin account



[ Jenkins integration ]

1. Install the Jenkins plugins

1. Jenkins Git Client plugin

2. Jenkins Git Plugin: integrates Jenkins with Git

3. Jenkins Gerrit Trigger plugin: on a Gerrit change, fetches the patch set, builds it, and posts a score

4. Hudson Gerrit plugin: enables the Git plugin configuration


2. Gerrit Trigger plugin

1. HTTP/S Canonical URL: the URL that points to Gerrit changes and patch sets

2. SSH connection: connects to Gerrit and listens for events from it


Create an SSH key as the user running Jenkins, and create a batch user in Gerrit for Jenkins to use.

If Jenkins runs under the jenkins account, creating the internal user below is unnecessary.

If the accounts differ, run the create-account command as Gerrit's administrator to create the internal user.

# log in with the skanddh account

$ ssh-keygen -t rsa

$ ssh -p 29418 skanddh@192.168.75.141


# skanddh must be a Gerrit administrator account; run this as skanddh

$ sudo cat /home/jenkins/.ssh/id_rsa.pub | \

ssh -p 29418 skanddh@192.168.75.141 gerrit create-account \

--group "'Non-Interactive Users'" --full-name Jenkins \

--email jenkins@localhost.com \
--ssh-key - jenkins


Verify that the Non-Interactive Users group in All-Projects has the permissions below (see the sketch after this list)

1. With the Stream Events permission, it can detect Gerrit changes remotely

2. With Read on refs/*, it can read changes in the Gerrit repository and clone it

3. With Label Code-Review (Verified) -1..+1 on refs/heads/*, it can score changes
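
For reference, the same grants can be written into All-Projects' project.config (edited through the refs/meta/config branch); a sketch of the relevant stanzas, assuming the group UUID is already listed in the groups file:

[capability]
        streamEvents = group Non-Interactive Users
[access "refs/*"]
        read = group Non-Interactive Users
[access "refs/heads/*"]
        label-Code-Review = -1..+1 group Non-Interactive Users
        label-Verified = -1..+1 group Non-Interactive Users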


Gerrit Trigger plugin configuration

Open the Jenkins URL

http://192.168.75.141:7070/jenkins/gerrit-trigger


1. Configure the URL and SSH connection

    Name : Gerrit

    Hostname : 192.168.75.141

    Frontend URL : http://192.168.75.141:8080

    SSH Port : 29418

    Username : jenkins

    E-mail : jenkins@localhost.com

    SSH Keyfile : /home/jenkins/.ssh/id_rsa

    SSH Keyfile Password :


2. Test it with Test Connection

3. Restart via the Start/Stop button at the bottom of the settings page


Open the Jenkins URL

http://192.168.75.141:7070/jenkins/gerrit_manual_trigger

Enter status:open in the Query field -> click the Search button

Check the changes awaiting review on the page http://192.168.75.141:8080/#q/status:open,n,z


Gerrit trigger job configuration

Change the build trigger from SCM polling (or another trigger policy) to Gerrit Event

In the Gerrit trigger settings, click the Advanced button to specify the Gerrit conditions


Git plugin configuration (appears once the Hudson Gerrit plugin is installed)

In the Git plugin, add the following after Gerrit's ref-spec

Click the Advanced button and change the Git repository settings

1. Set $GERRIT_REFSPEC as the Git refspec to clone

2. Set $GERRIT_PATCHSET_REVISION as the Git branch to build

3. Set the trigger method to Gerrit trigger


Enable these two options

1. Wipe out workspace: clears the workspace

2. Use shallow clone: uses a shallow clone





[ HP Helion OpenStack installation ]

Compute IP          : 172.23.147.187

Virtual IP (NAT)        :  192.168.75.0

Virtual IP (Host-Only)  :  192.168.230.0


1. Download the HP Helion OpenStack Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show

# mkdir -p /root/work

# tar -xzvf HP_Helion_OpenStack_1.1.1.tgz -C /root/work



2. Installation documentation

http://docs.hpcloud.com/helion/community/install-virtual/


3. Configure sudo

$ sudo visudo

stack   ALL=(ALL:ALL) NOPASSWD: ALL


4. Log in as root and generate an RSA key

$ sudo su -

# ssh-keygen -t rsa


# Install software

# apt-get update

# apt-get dist-upgrade

# sudo su -l -c "apt-get install -y qemu-kvm libvirt-bin openvswitch-switch openvswitch-common python-libvirt qemu-system-x86 ntpdate ntp openssh-server"


5. Configure the NTP server

# ntpdate -u time.bora.net

# vi /etc/ntp.conf

...

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server time.bora.net

...

restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap



# Use Ubuntu's ntp server as a fallback.

#server ntp.ubuntu.com

server 127.127.1.0

...


# /etc/init.d/ntp restart

# ntpq -p                             # show NTP status

# dpkg-reconfigure ntp         # when NTP throws errors




6. Unpack the archive

# mkdir work

# cd work

# tar zxvf /{full path to downloaded file from step 2}/Helion_Openstack_Community_V1.4.tar.gz



7. Adjust the VM specs

# vi /root/vm_plan.csv

,,,,2,4096,512,Undercloud

,,,,2,24576,512,OvercloudControl

,,,,2,8192,512,OvercloudSwiftStorage

,,,,4,16384,512,OvercloudCompute



8. Start the seed VM

# export SEED_NTP_SERVER=192.168.122.1

# export NODE_MEM=4096

# HP_VM_MODE=y bash -x /root/work/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed --vm-plan /root/vm_plan.csv 2>&1|tee seedvminstall.log



9. Create the Under Cloud and Over Cloud

# connect to the seed VM

# ssh 192.0.2.1


# Set the variables

# export OVERCLOUD_CONTROLSCALE=1

export OVERCLOUD_SWIFTSTORAGESCALE=1

export OVERCLOUD_SWIFT_REPLICA_COUNT=1

export ENABLE_CENTRALIZED_LOGGING=0

export USE_TRICKLE=0

export OVERCLOUD_STACK_TIMEOUT=240

export UNDERCLOUD_STACK_TIMEOUT=240

export OVERCLOUD_NTP_SERVER=192.168.122.1

export UNDERCLOUD_NTP_SERVER=192.168.122.1

export FLOATING_START=192.0.8.140

export FLOATING_END=192.0.8.240

export FLOATING_CIDR=192.0.8.0/21

export OVERCLOUD_NEUTRON_DVR=False



# Change the locale

export LANGUAGE=en_US.UTF-8

export LANG=en_US.UTF-8

export LC_ALL=en_US.UTF-8



# start Under Cloud

# bash -x tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1|tee stackinstall.log



10. Note the IP addresses below

OVERCLOUD_IP_ADDRESS  : 192.0.2.23

UNDERCLOUD_IP_ADDRESS  : 192.0.2.2



11. Verify the installation

# Check the passwords of the demo and admin users

# cat /root/tripleo/tripleo-undercloud-passwords

# cat /root/tripleo/tripleo-overcloud-passwords


12. From the seed VM, look up the undercloud IP

# . /root/stackrc

# UNDERCLOUD_IP=$(nova list | grep "undercloud" | awk ' { print $12 } ' | sed s/ctlplane=// )

# echo $UNDERCLOUD_IP


13. From the seed VM, look up the overcloud IP

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json

# . /root/tripleo/tripleo-incubator/undercloudrc

# OVERCLOUD_IP=$(heat output-show overcloud KeystoneURL | cut -d: -f2 | sed s,/,,g )

# echo $OVERCLOUD_IP



[ Changes so that VMs inside the OverCloud can reach the Internet ]

0. DNS change (overcloud)

/etc/resolv.conf


1. security rule check (overcloud)



2. IP forwarding (host, seed, undercloud, overcloud) - apply as sketched below these keys
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.ip_forward = 1
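
To make these permanent and apply them without a reboot, the usual pattern is /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) followed by a reload:

# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p                                    # reload /etc/sysctl.conf
# sysctl -w net.ipv4.conf.all.rp_filter=0      # or set a single key immediately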


3. br-tun, br-int, br-ex up (host, seed, overcloud, compute)
ip link set br-tun up
ip link set br-ex up
ip link set br-int up


4. Host iptables NAT add
iptables -t nat -A POSTROUTING -s 192.0.8.0/21 ! -d 192.0.2.0/24 -j SNAT --to-source 172.23.147.187


5. Host iptables filter delete
iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable



6. Host iptables NAT DNAT port change

# overcloud Horizon port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.0.2.21


# ALS port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.0.8.143




13. Open console access from the Host

# ssh 192.0.2.1 -R 443:<overcloud IP>:443 -L <laptop IP>:443:127.0.0.1:443

# ssh 192.0.2.1 -R 443:192.0.2.24:443 -L 172.23.147.187:443:127.0.0.1:443



14. connecting to the demo vm

# ssh debian@192.0.8.141



15. Change the overcloud scheduler memory ratio

# ssh heat-admin@192.0.2.23                  # overcloud-controllerMgmt

$ sudo su -

# vi /etc/nova/nova.conf

...

ram_allocation_ratio=100

...

# restart nova-scheduler

# exit


# do the same on the other overcloud controllers

# ssh heat-admin@192.0.2.27             # overcloud-controller0

# ssh heat-admin@192.0.2.28             # overcloud-controller1





16. Access monitoring

http://<under cloud ip>/icinga           # icingaadmin / icingaadmin



17. Get the Kibana password to access undercloud logging

# ssh heat-admin@<undercloud IP>

# cat /opt/kibana/htpasswd.cfg

http://<under cloud ip>:81                   # kibana / ?????




# Back up the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --save-vms


# Recover the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --resume-vms





[ HDP install ]


1. Download the HP Development Platform Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show


2. HDP installation documentation

https://docs.hpcloud.com/helion/devplatform/install/community


* It can be installed from the Host (base) or from the Seed


3. Install required software on the Seed

# pip install cffi enum34 pyasn1 virtualenv

# scp -o StrictHostKeyChecking=no 192.0.2.21:/usr/local/share/ca-certificates/ephemeralca-cacert.crt /root/ephemeralca-cacert.crt


# tar -zxvf hp_helion_devplatform_community.tar.gz

# cd dev-platform-installer

# -p: admin password, -a: overcloud IP, -i: tenant ID, -e: CA certificate
# ./DevelopmentPlatform_Enable.sh \
    -u admin \
    -p bd9352ceed184839e2231d2a13062d461928b857 \
    -a 192.0.2.21 \
    -i c1821d8687f14fd4b74c11892f5d7af0 \
    -e /root/ephemeralca-cacert.crt



3. Install required software on the Host (Base)

# sudo apt-get install -y python-dev libffi-dev libssl-dev python-virtualenv python-pip

# mkdir -p hdp_work

# cd hdp_work

# tar -zxvf /home/stack/Downloads/HDP/hp_helion_devplatform_community.tar.gz

# cd dev-platform-installer

# ./DevelopmentPlatform_Setup.sh -p {admin_user_password} -a {auth_keystone_ip_address}

# ./DevelopmentPlatform_Setup.sh -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -a 192.0.2.21


Or specify -n (username) and -t (tenant name) like this:

# ./DevelopmentPlatform_Setup.sh -r regionOne -n admin -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -t admin -a '192.0.2.21'

# If you don't know the admin password, run:

# cat /root/tripleo/tripleo-overcloud-passwords


# If you don't know the Keystone IP, run:

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json . /root/tripleo/tripleo-incubator/undercloudrc

# heat output-show overcloud KeystoneURL




5. Download the client tools for cluster configuration

http://docs.hpcloud.com/helion/devplatform/1.2/ALS-developer-trial-quick-start/2

Download cf-mgmt and the ALS Client

# copy the files from the host to the seed

$ unzip *.zip

$ scp helion-1.2.0.1-linux-glibc2.3-x86_64/helion root@192.0.2.1:client

$ scp linux-amd64/cf-mgmt root@192.0.2.1:client


# run on the seed

6. Create the Cluster

$ vi ~/.profile

export PATH=$PATH:/root/client/cf-mgmt:/root/client/helion:.


$ cf-mgmt update









===========================   Reference   ======================



5. DNS settings for the VMs

# vi tripleo/hp_passthrough/overcloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}


# vi tripleo/hp_passthrough/undercloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}



6. Change the VM root disk location

# mkdir -p /data/libvirt/images           # pre-create the directory for the VM qcow2 images

# vi /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh

...

IMAGES_DIR=${IMAGES_DIR:-"/data/libvirt/images"}    # change the directory on line 127

...


# virsh pool-dumpxml default > pool.xml


# vi pool.xml

<pool type='dir'>

  <name>default</name>

  <uuid>9690731d-e0d1-49d1-88a4-b25bccc78418</uuid>

  <capacity unit='bytes'>436400848896</capacity>

  <allocation unit='bytes'>2789785694208</allocation>

  <available unit='bytes'>18446741720324706304</available>

  <source>

  </source>

  <target>

    <path>/data/libvirt/images</path>

    <permissions>

      <mode>0711</mode>

      <owner>-1</owner>

      <group>-1</group>

    </permissions>

  </target>

</pool>


# virsh pool-destroy default

# virsh pool-create pool.xml



8. Change the IPs on the matching lines of these files: 192.0.8.0 -> 192.10.8.0,      192.0.15.0 -> 192.10.15.0

./tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh:800

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:70

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:71

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:72

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:181

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:182

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:183



# Variables to set when installing the undercloud and overcloud

# export OVERCLOUD_NEUTRON_DVR=False

# export OVERCLOUD_CINDER_LVMLOOPDEVSIZE=500000      # as much as needed


# Change the seed locale

# locale-gen en_US.UTF-8

# sudo dpkg-reconfigure locales          # if needed



# Set variables (when errors occur in the Community version)

# vi ./tripleo/tripleo-incubator/scripts/hp_ced_setup_cloud_env.sh

...

export OVERCLOUD_CONTROLSCALE=${OVERCLOUD_CONTROLSCALE:-2}    # change line 40

...


13. Changing the VM DNS when it wasn't set at the start

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json

# . /root/tripleo/tripleo-incubator/undercloudrc

# neutron subnet-list

# neutron subnet-update --dns-nameserver 203.236.1.12 --dns-nameserver 203.236.20.11 c4316d44-e2ae-43fb-b462-40fa767bd9fb















[ Tomcat 8.0 installation (Mac) ]

1. Download Tomcat 8.0

http://tomcat.apache.org/download-80.cgi


2. Install

$ sudo mkdir -p /usr/local

$ sudo mv ~/Downloads/apache-tomcat-8.0.23 /usr/local


3. Delete and recreate the symbolic link

$ sudo rm -f /Library/Tomcat

$ sudo ln -s /usr/local/apache-tomcat-8.0.23 /Library/Tomcat


4. Set up execution permissions

$ sudo chown -R stephen /Library/Tomcat

$ sudo chmod +x /Library/Tomcat/bin/*.sh


5. start / stop

$ /Library/Tomcat/bin/startup.sh

$ /Library/Tomcat/bin/shutdown.sh


6. Download a Tomcat controller

http://www.activata.co.uk/downloads/


















[ devstack prerequisites ]
Installing devstack requires sufficient CPU, memory, and disk.


[ localrc ]

VOLUME_BACKING_FILE_SIZE=70000M


[ Fixing the rabbitmq memory leak ]
$ sudo vi /etc/rabbitmq/rabbitmq-env.conf
#celery_ignore_result = true

$ sudo service rabbitmq-server restart


[ cpu, memory overcommit ]
$ vi /etc/nova/nova.conf

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 100.0
disk_allocation_ratio = 100.0



[ Change the DNS assigned to VMs ]
$ neutron subnet-list
$ neutron subnet-update <subnet> --dns_nameservers list=true 8.8.8.8 8.8.4.4



[ Ubuntu Server 14.04 Image Upload ]
Name: Ubuntu Server 14.04 64-bit
URL: http://uec-images.ubuntu.com/releases/14.04.2/14.04.2/ubuntu-14.04-server-cloudimg-amd64-disk1.img
Format: QCOW2 - QEMU Emulator
Minimum disk: 5
Minimum RAM: 1024

See the images on the sites below
https://help.ubuntu.com/community/UEC/Images
http://uec-images.ubuntu.com/releases/



[ Test that the OpenStack API server is reachable ]

[ Install Ruby ]
$ sudo apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ sudo apt-get install liblzma-dev zlib1g-dev
$ ruby -v
$ nokogiri --version

$ sudo gem install fog
$ vi .fog

:openstack:
  :openstack_auth_url:  http://192.168.230.141:5000/v2.0/tokens
  :openstack_api_key:   your-password
  :openstack_username:  admin
  :openstack_tenant:    demo
  :openstack_region:    RegionOne # Optional

$ fog openstack
>>Compute[:openstack].servers



[ Verify access to the OpenStack metadata server ]
$ curl http://169.254.169.254

[ Passing user_data with fog ]
$ fog openstack
>> s = Compute[:openstack].servers.create(name: 'test', flavor_ref: , image_ref: , personality: [{'path' => 'user_data.json', 'contents' => 'test' }])



[ Check whether OpenStack API calls are rate-limited ]
$ fog openstack
>> 100.times { p Compute[:openstack].servers }


[ Test creating a large volume ]
1. Create a 30G volume
2. Attach the volume to an instance

[ If volume attach fails, check that tgtd is running ]
$ sudo netstat -tulpn | grep 3260
$ sudo service tgt start

3. Format the additional volume
$ sudo fdisk -l
$ sudo fdisk /dev/vdb

Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048): ENTER
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559): ENTER
Command (m for help): t
Partition number (1-4, default 1): 1
Hex code (type L to list codes): 83
Command (m for help): w

$ sudo mkfs.ext3 /dev/vdb1
$ sudo mkdir /disk
$ sudo mount -t ext3 /dev/vdb1 /disk
$ cd /disk
$ sudo touch pla





[ OpenStack configuration for installing MicroBOSH ]
$ mkdir ~/my-micro-deployment
$ cd my-micro-deployment

[ Prepare the Nova client ]
$ sudo apt-get install python-novaclient
$ unset OS_SERVICE_TOKEN
$ unset OS_SERVICE_ENDPOINT
$ vi adminrc
export OS_USERNAME=admin
export OS_PASSWORD=imsi00
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://192.168.230.141:35357/v2.0

1. Create a keypair: microbosh
$ nova keypair-add microbosh >> microbosh.pem
$ chmod 600 microbosh.pem

2. Create a security group: bosh
Name: bosh
Description: BOSH Security Group

3. Enter the security rules
Direction  Ether Type  IP Protocol  Port Range  Remote
Ingress    IPv4        TCP          1-65535     bosh
Ingress    IPv4        TCP          25777       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25555       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25250       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          6868        0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          4222        0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          68          0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          53          0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          53          0.0.0.0/0 (CIDR)
Egress     IPv4        Any          -           0.0.0.0/0 (CIDR)
Egress     IPv6        Any          -           ::/0 (CIDR)


4. Allocate Floating IP



[ Install MicroBOSH ]
1. Write the manifest YAML

$ vi manifest.yml

name: microbosh

network:
  type: manual
  vip: 192.168.75.206       # Replace with a floating IP address
  ip: 10.0.0.15    # subnet IP address allocation pool of OpenStack internal network
  cloud_properties:
    net_id: a34928c6-9715-4a91-911e-a6822afd600b # internal network UUID

resources:
  persistent_disk: 20000
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://192.168.230.141:35357/v2.0   # Identity API endpoint
      tenant: demo          # Replace with OpenStack tenant name
      username: admin    # Replace with OpenStack username
      api_key: your-password      # Replace with your OpenStack password
      default_key_name: microbosh   # OpenStack Keypair name
      private_key: microbosh.pem     # Path to OpenStack Keypair private key
      default_security_groups: [bosh]

apply_spec:
  properties:
    director: {max_threads: 3}
    hm: {resurrector_enabled: true}
    ntp: [time.bora.net, 0.north-america.pool.ntp.org, 1.north-america.pool.ntp.org]


2. Install the BOSH CLI
$ sudo gem install bosh_cli --no-ri --no-rdoc
$ sudo gem install bosh_cli_plugin_micro --no-ri --no-rdoc


3. Download stemcell
https://bosh.io/stemcells

[ Download the Ubuntu Server 14.04 stemcell ]
https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

$ curl -k -L -J -O https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

or,

$ wget --no-check-certificate --content-disposition https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

4. MicroBosh Deploy
$ bosh micro deployment manifest.yml
$ bosh micro deploy bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz
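
Progress and the resulting director can be checked with the micro plugin (a quick sanity check; output varies by CLI version):

$ bosh micro status                    # shows the stemcell, VM CID, and target director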


5. MicroBosh Undeploy
$ bosh micro delete


6. MicroBosh Redeploy
$ bosh micro deploy --update bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz





[ Install Cloud Foundry on the MicroBOSH VM (Warden-based) ]
1. Connect to the VM
    - ID: vcap / c1oudc0w
    - sudo su -


2. Install Ruby & the BOSH CLI
$ apt-get update
$ apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ apt-get install liblzma-dev zlib1g-dev

$ gem install bosh_cli --no-ri --no-rdoc -r
$ gem install bosh_cli_plugin_micro --no-ri --no-rdoc -r


* How to upgrade gem
$ wget http://production.cf.rubygems.org/rubygems/rubygems-2.4.8.tgz
$ tar xvfz rubygems-2.4.8.tgz
$ cd rubygems-2.4.8
$ ruby setup.rb

* Add a remote gem source
$ gem sources --add http://rubygems.org/


3. Install Go
$ mkdir -p CloudFoundry
$ cd CloudFoundry
$ wget --no-check-certificate https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz
$ tar -C /usr/local -xzf go1.4.2.linux-amd64.tar.gz
$ mkdir -p /usr/local/gopath

$ vi ~/.profile
export GOPATH=/usr/local/gopath
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

$ . ~/.profile
$ apt-get install git
$ go get github.com/cloudfoundry-incubator/spiff


4. Download the Cloud Foundry source
$ git clone https://github.com/cloudfoundry/cf-release.git
$ cd cf-release
$ ./update


5. Install Cloud Foundry manually
$ bosh target 192.168.75.206
   admin / admin


* Expanding the /tmp directory
# first attach an additional disk
$ mkfs.ext3 /dev/vdc
$ mkdir -p /tmp2
$ mount -t ext3 /dev/vdc /tmp2
$ mount --bind /tmp2 /tmp
$ chown root.root /tmp
$ chmod 1777 /tmp

* To undo the mount
$ umount /tmp


$ bosh upload release releases/cf-212.yml


$ cp spec/fixtures/openstack/cf-stub.yml .





[ Install BOSH Lite (Warden-based) on Mac ]
1. install vagrant
http://www.vagrantup.com/downloads.html


2. Download bosh-lite
$ git clone https://github.com/cloudfoundry/bosh-lite
$ cd bosh-lite


3. install VirtualBox
https://www.virtualbox.org/wiki/Downloads


4. start vagrant
$ vagrant up --provider=virtualbox


5. Target the Bosh Director and login admin/admin
$ bosh target 192.168.50.4 lite
$ bosh login


6. Install Homebrew and Spiff
$ xcode-select --install
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew tap xoebus/homebrew-cloudfoundry
$ brew install spiff


7. Install Cloud Foundry
$ git clone https://github.com/cloudfoundry/cf-release
$ cd cf-release
$ ./update


8. Single Command Deploy
$ cd ~/CloudFoundry/bosh-list
$ ./bin/provision_cf


9. Add a route
$ ./bin/add-route


10. Restart the containers after a VM restart
$ bosh cck
Choose "2" to create missing VM:
"yes"




[ Deploy a simple Go web app ]

$ ssh vcap@192.168.50.4     # password : c1oudc0w

$ bosh vms

$ cf api --skip-ssl-validation https://api.192.168.0.34.xip.io           # ha-proxy ip


$ cf login

Email> admin

Password> admin


$ cf create-org test-org

$ cf target -o test-org

$ cf create-space development

$ cf target -o test-org -s development


$ sudo apt-get update

$ sudo apt-get install git

$ sudo apt-get install golang

$ cf update-buildpack go_buildpack

$ git clone https://github.com/cloudfoundry-community/simple-go-web-app.git

$ cd simple-go-web-app

### an alternative buildpack: https://github.com/cloudfoundry/go-buildpack.git

$ cf push simple-go-web -b https://github.com/kr/heroku-buildpack-go.git

$ cf apps             # app list

$ cf logs simple-go-web --recent     # app deploy log


# cf login

$ cf login -a http://api.10.244.0.34.xip.io -u admin -o test-org -s development
















PyCharm setup


On Mac


1. Install virtualenv

$ sudo pip install virtualenv


2. Install the ffi.h header

$ xcode-select --install


3. Create a virtualenv

$ virtualenv .venv


4. Enter the virtualenv

$ . ./.venv/bin/activate


5. Install the dependencies

(.venv)$ pip install -r requirements.txt


6. Export the environment

(.venv)$ pip freeze > requirements.txt













[ Presentation / storytelling notes ]



I. It touches the heart.


1. Awaken the master within

   - What makes your heart race? Follow it, and the audience will connect as well.

   - Definition of entrepreneurial passion: an intense, positive feeling you experience for something deeply meaningful to you as an individual;
                                     something at the core of a person's self-identity.

   - When investors assess a new venture's potential, perceived passion widens the gap. (The CEO's passion is what wins the investment.)

   - When you speak, tell stories and express your passion.

   - Passion is contagious.

   - To help someone, you must listen rather than talk. (Growing tomatoes in Africa; the hippo herd.)


2. The art of storytelling

   - Tell stories that reach the listener's heart and mind.

   - Identity has power. Establish the right identity and you can make even what people around you consider nonsense make sense. (Not drinking until age 52.)

   - A story (a plot) breaks down walls. (A family story everyone relates to.)

   - You absolutely need a story that melts people's hearts. Only then will they open up and at least lend an ear to what we say.

   - Ethos 10%: credibility, recognized achievements or an impressive title;   Logos 25%: statistics;   Pathos 65%: emotion

   - Kinds of stories: a personal story directly tied to the topic, another person's story that taught a lesson, a success or failure story of a product or brand

   - Curiosity arises when we sense a gap in our knowledge. We even sit through painfully terrible movies, because not knowing the ending can be the greater pain.

   - The public does not know what it wants.

   - Weave a successful brand's story into a personal "hero story" and it succeeds. The audience looks for someone to root for.

   - Bring a hero and a villain on stage. (Like the Cinderella story.)


3. Let's have a conversation.

   - Fully digest what you are going to say. Practice relentlessly. Only then can you deliver it as comfortably as a conversation between close friends.

   - Speak as if conversing. Your speaking pace should be your everyday conversational pace.


OpenStack Manual build


0. Environment setup

$ sudo easy_install --upgrade transifex-client

$ sudo apt-get install gnome-doc-utils


- Save the Transifex configuration file

$ vi ~/.transifexrc

[https://www.transifex.com]
hostname = https://www.transifex.com
password = 오픈스택패스워드
token =
username = skanddh@gmail.com

[https://www.transifex.net]
hostname = https://www.transifex.com
password = your-openstack-password
token =
username = skanddh@gmail.com


1. Download the source with git


2. Pull all the latest translations from Transifex

$ tx pull -f -l ko_KR

 

# To pull only the install-guide

$ tx pull -f -l ko_KR -r openstack-manuals-i18n.install-guide


2-1. Install the Korean fonts

$ git clone https://github.com/stackforge/clouddocs-maven-plugin

$ cd clouddocs-maven-plugin/src/main/resources/fonts/

$ mkdir -p nanum-font

$ wget http://cdn.naver.com/naver/NanumFont/fontfiles/NanumFont_TTF_ALL.zip

$ unzip NanumFont_TTF_ALL.zip -d ~/Git/clouddocs-maven-plugin/src/main/resources/fonts/nanum-font/

$ cd clouddocs-maven-plugin/src/main/resources/cloud/fo

$ vi docbook.xsl


  <xsl:param name="bodyFont">

    <xsl:choose>

      <xsl:when test="starts-with(/*/@xml:lang, 'zh')">AR-PL-New-Sung</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ja')">TakaoGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko')">NanumGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko_KR')">NanumGothic</xsl:when>

      <xsl:otherwise>CartoGothic Std</xsl:otherwise>

    </xsl:choose>

  </xsl:param>


  <xsl:param name="monospace.font.family">

    <xsl:choose>

      <xsl:when test="$monospaceFont != ''"><xsl:value-of select="$monospaceFont"/></xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'zh')">AR-PL-New-Sung</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ja')">TakaoGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko')">NanumGothic</xsl:when>

      <xsl:when test="starts-with(/*/@xml:lang, 'ko_KR')">NanumGothic</xsl:when>

      <xsl:otherwise>monospace</xsl:otherwise>

    </xsl:choose>

  </xsl:param>


$ cd clouddocs-maven-plugin

$ vi pom.xml

...

<version>2.1.5-SNAPSHOT</version>

...

$ mvn clean install


3. Build the install-guide in Korean

$ vi .tx/config


...

[openstack-manuals-i18n.common]

file_filter = doc/common/locale/<lang>.po

minimum_perc = 8

source_file = doc/common/locale/common.pot

source_lang = ko_KR

type = PO


[openstack-manuals-i18n.install-guide]
file_filter = doc/install-guide/locale/<lang>.po
minimum_perc = 75
source_file = doc/install-guide/locale/install-guide.pot
source_lang = ko_KR                # changed from en to ko_KR
type = PO

...


$ cd doc/install-guide

$ mvn clean generate-sources

 

 

4. If step 3 fails, build the Korean docs with tox


- Read the XML files and generate a PO template (POT) file

./tools/generatepot doc/install-guide/


- Create per-language PO files from the POT template (upload to Transifex for translation)


- Choose the folders to generate

$ sudo pip install tox

$ tox -e py27

$ source .tox/py27/bin/activate

 

(py27) $ vi doc-tools-check-languages.conf


# common is required at build time

declare -A DIRECTORIES=(
["fr"]="common glossary user-guide image-guide"
["ja"]="common glossary image-guide install-guide user-guide user-guide-admin"
["pt_BR"]="common install-guide"
["zh_CN"]="common glossary arch-design image-guide"
["ko_KR"]="common install-guide"
)

# books to be built
declare -A BOOKS=(
["fr"]="user-guide image-guide"
["ja"]="image-guide install-guide user-guide user-guide-admin"
["pt_BR"]="install-guide"
["zh_CN"]="arch-design image-guide"
["ko_KR"]="install-guide"
)


# the common module must have a ko_KR.po file

(py27) $ cd doc/common/locale

(py27) $ cp ja.po ko_KR.po


- Merge the translated messages with tox and generate the DocBook (creates the generated folder)

(py27) $ tox -e checkniceness - to run the niceness tests (for example, to see extra whitespaces)

(py27) $ tox -e checksyntax - to run syntax checks
(py27) $ tox -e checkdeletions - to check that no deleted files are referenced
(py27) $ tox -e checkbuild - to actually build the manual(s). This will also generate a directory publish-docs that contains the built files for inspection. You can also use doc/local-files.html for looking at the manuals.
(py27) $ tox -e checklang - to check all translated manuals

 

or


$ tox -e buildlang -- ko_KR


- Build the PDF files from the generated folder

(py27) $ cd generated/ko_KR

(py27) $ vi pom.xml

 

  <modules>
    <!--module>admin-guide-cloud</module>
    <module>arch-design</module>
    <module>cli-reference</module>
    <module>config-reference</module>
    <module>glossary</module>
    <module>hot-reference</module>
    <module>image-guide</module-->
    <module>install-guide</module>
    <!--module>user-guide</module>
    <module>user-guide-admin</module-->
  </modules>

...

  <build>

    <plugins>

      <plugin>

        <groupId>com.rackspace.cloud.api</groupId>

        <artifactId>clouddocs-maven-plugin</artifactId>

        <!--version>2.1.3</version-->

        <version>2.1.5-SNAPSHOT</version>

      </plugin>

    </plugins>

  </build>


(py27) $ cd install-giude 

(py27) $ mvn clean generate-sources




[ When building from RST files ]


1. Install Sphinx

$ sudo pip install Sphinx


2. Create a POT file from each RST file (slicing)

$ sphinx-build -b gettext doc/playground-user-guide/source/ doc/playground-user-guide/source/locale/


3. Merge the individual POT files into a single POT file

$ msgcat doc/playground-user-guide/source/locale/*.pot > doc/playground-user-guide/source/locale/playground-user-guide.pot


4. Upload the POT file to Transifex and create per-language PO files (translation work)

   - Jenkins handles this automatically when a commit lands

$ tx set --auto-local -r openstack-i18n.playground-user-guide "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po"  --source-lang en --source-file doc/playground-user-guide/source/locale/playground-user-guide.pot -t PO --execute

$ tx push -s


5. Download

$ tx set --auto-local -r openstack-i18n.playground-user-guide "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po"  --source-lang en --source-file doc/playground-user-guide/source/locale/playground-user-guide.pot -t PO --execute

$ tx pull -l ko_KR


6. build HTML


- Split the merged PO file back into the individual PO files

$ msgmerge -o doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.po doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/playground-user-guide.po doc/playground-user-guide/source/locale/A.pot


- Convert each small PO file to an MO file

$ msgfmt "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.po" -o "doc/playground-user-guide/source/locale/ko_KR/LC_MESSAGES/A.mo"


- build HTML

$ sphinx-build -D language=ko_KR doc/playground-user-guide/source/ doc/playground-user-guide/build/html



* Extracting a POT file from the source code

$ pybabel extract -o nova/locale/nova.pot nova/



[ Notes on Jack Ma and Alibaba ]

1. The most important thing is not to give up.

   - Once a person has chosen the path of founding a company, they must keep walking that path.


2. To do marketing, you have to bring in famous figures.

   - To make his Internet conference a success, Ma Yun had Jin Yong moderate it and invited famous company heads.


3. A sense of mission is the driving force of a company's growth.

   - GE: making the world bright

   - Disneyland: making everyone happy

   - Toyota: providing the best service

   - Alibaba: there are no difficult deals in the world

[ English expressions ]

1. For one thing, it's called soccer.


2. Your team's scored two goals all season. I'm not taking a big risk.   (= it's safe to talk trash)


3. I'd be more worried that she couldn't come up with a single book title.   (come up with = think of)


4. I'm quite adept at Japanese flower arrangement.   (adept at = skilled in)


5. Maybe catch up on scrapbooking.   (catch up on = work through a backlog of)


6. Don't do the "double question to prove a point" thing.   (to prove a point = to make a point emphatically)


7. He probably jumped on Luke and Luke just fought back.   (jump on = pick on, lay into)


8. Could have been worse.


9. Apparently, there was some name-calling and shoving on the playground.


10. Is that what this is all about?


11. I'm getting the sense that you're all related...   (get the sense = have a feeling)


12. This is not how mature young men behave.


13. I think we should cancel with them for the barbecue.   (cancel with someone = call off plans with them)


14. Just sweep it under the rug.   (sweep under the rug = hide it, keep it quiet)



a bad liar : someone who can't lie convincingly

Hamburglar : an inmate (after the McDonald's burglar character)

starting offensive lineman : (in American football) a first-string offensive lineman

Kid's a menace : the kid is a troublemaker

knucklehead : (informal) a fool, an idiot

work this out : to sort this out


