[ Kubernetes 1.1.2 Installation on Ubuntu ]

0. Server configuration

Master   : 192.168.75.211  (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)

Node01  : 192.168.75.212  (kube-proxy, kubelet)

Node02  : 192.168.75.213  (kube-proxy, kubelet)


etcd-2.2.1, flannel-0.5.5, k8s-1.1.2



[ Install on both the Master and Node servers ]

1. Install the required software with apt-get

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

#deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

#deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

#deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

#deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

#deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

#deb https://apt.dockerproject.org/repo ubuntu-wily main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo service docker restart



2. Configure sudo

# gpasswd -a stack sudo   (this did not work for me; edit sudoers directly instead)

$ sudo visudo

stack   ALL=(ALL:ALL) NOPASSWD: ALL



3. Install ntp & set up ssh keys

# ssh: the stack account must be able to connect directly between the master and each node

# ssh: on the master and each node, the stack account must be able to connect directly to the root account on the same server
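
A minimal sketch of the key distribution (assuming the stack account and the hostnames registered in section 4 below):

$ ssh-keygen -t rsa

$ for node in master node01 node02; do ssh-copy-id stack@${node}; done

$ ssh-copy-id root@localhost        # repeat on each server for the stack -> root hop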



4. Configure /etc/hosts

192.168.75.211    master

192.168.75.212    node01

192.168.75.213    node02



5. Install Go

1. Download

$ cd /home/stack/downloads

$ wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz

$ sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz


2. Set the environment variables

$ sudo vi /etc/profile

export GOROOT=/usr/local/go

export PATH=$PATH:/usr/local/go/bin


$ sudo visudo             # set this so the go path also applies under sudo

Defaults    env_reset

Defaults    env_keep += "GOPATH"

Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"


$ cd

$ vi .bash_profile

export GOPATH=$HOME/Documents/go_workspace:$HOME/Documents/go_workspace/src/k8s.io/kubernetes/Godeps/_workspace

export PATH=$HOME/Documents/go_workspace/bin:$PATH
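
To apply and verify the setup (a quick check; the version reported should match the tarball installed above):

$ source /etc/profile && source ~/.bash_profile

$ go version                # go version go1.5.2 linux/amd64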



6. Install kubernetes

# Download with go

$ go get k8s.io/kubernetes   # equivalent to: git clone https://github.com/kubernetes/kubernetes.git


$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes

$ git checkout -b v1.1.2 tags/v1.1.2

$ make all                                      # build artifacts are created under the _output directory


# Rebuild with make after modifying the source (for reference); artifacts land in _output

$ make all WHAT=plugin/cmd/kube-scheduler GOFLAGS=-v      # scheduler

$ make all WHAT=cmd/kubelet GOFLAGS=-v                           # kubelet

$ make all WHAT=cmd/kube-apiserver GOFLAGS=-v                # apiserver


# Rebuilding after modifying the source (for reference)

$ hack/build-go.sh                  # running make invokes build-go.sh
$ hack/local-up-cluster.sh        # to bring up a local cluster


$ sudo su -

# cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu


# export KUBE_VERSION=1.1.2

# export FLANNEL_VERSION=0.5.5

# export ETCD_VERSION=2.2.1


# ./build.sh                 # downloads into the binaries directory

# exit



$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ vi config-default.sh


export nodes="stack@192.168.75.211 stack@192.168.75.212"

export role="a i"

export NUM_MINIONS=${NUM_MINIONS:-1}

export SERVICE_CLUSTER_IP_RANGE=192.168.230.0/24

export FLANNEL_NET=172.16.0.0/16



ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"

DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.230.10"}

DNS_DOMAIN="cluster.local"

DNS_REPLICAS=1


ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"


$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh


# Files copied (by kube-up.sh)

make-ca-cert.sh    

reconfDocker.sh    

config-default.sh    

util.sh    

kube-scheduler.conf    

kube-apiserver.conf    

etcd.conf    

kube-controller-manager.conf    

flanneld.conf    

kube-controller-manager    

kube-scheduler    

etcd    

kube-apiserver    

flanneld    

kube-controller-manager    

etcdctl    

kube-scheduler    

etcd    

kube-apiserver    

flanneld



# Copy kubectl

$ sudo cp ubuntu/binaries/kubectl /opt/bin/.


# Add /opt/bin to the PATH

$ vi ~/.bash_profile

export PATH=/opt/bin:$PATH

export KUBECTL_PATH=/opt/bin/kubectl



# Install the add-ons

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh


# If an error occurs, run the following (it downloads the Docker build image)

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes

$ ./build/run.sh hack/build-cross.sh


# Install the add-ons again

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh



[ Tearing down the Kubernetes installation ]

$ cd ..

$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh


# Remove the docker containers left running on node01

$ docker ps -a | awk '{print $1}' | grep -v CONTAINER | xargs docker stop

$ docker ps -a | awk '{print $1}' | grep -v CONTAINER | xargs docker rm

$ sudo cp ubuntu/binaries/kubectl /opt/bin/.                # kubectl must be copied to /opt/bin


$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh



[ Connecting the Master's Docker to flannel ]

$ sudo service docker stop

$ sudo ip link set dev docker0 down

$ sudo brctl delbr docker0

$ cat /run/flannel/subnet.env      # check flannel's subnet and mtu values
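
Sample contents of /run/flannel/subnet.env (the subnet is assigned per host, so the values below are illustrative):

FLANNEL_SUBNET=172.16.25.1/24
FLANNEL_MTU=1472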

$ sudo vi /etc/default/docker

DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.25.1/24 --mtu=1472"


$ sudo service docker start

$ sudo ip link set dev docker0 up



# Garbage containers pile up on node01 (visible with docker ps -a); just clean them out

# ssh into node01 and list the garbage

$ docker ps -a | grep Exited | awk '{print $1}'

$ docker ps -a | grep Exited | awk '{print $1}' | xargs docker rm


# Where kubernetes volumes are created: /var/lib/kubelet/pods

# kubernetes garbage-collection https://github.com/kubernetes/kubernetes/blob/master/docs/admin/garbage-collection.md


$ kubectl get nodes

$ kubectl get pods --namespace=kube-system         # check the add-on pods

$ kubectl cluster-info


# View the SkyDNS Pod details

$ kubectl describe pod kube-dns-v9-549av --namespace=kube-system


# Verify DNS

$ vi busybox.yaml

apiVersion: v1

kind: Pod

metadata:

  name: busybox

  namespace: default

spec:

  containers:

  - image: busybox

    command:

      - sleep

      - "3600"

    imagePullPolicy: IfNotPresent

    name: busybox

  restartPolicy: Always


$ kubectl create -f busybox.yaml

$ kubectl get pods busybox


# Usage: kubectl exec POD [-c CONTAINER] -i -t -- COMMAND [args...] [flags]

$ kubectl exec busybox -- nslookup kubernetes.default
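
Expected output when DNS is healthy (addresses follow the SERVICE_CLUSTER_IP_RANGE and DNS_SERVER_IP configured above):

Server:    192.168.230.10
Address 1: 192.168.230.10

Name:      kubernetes.default
Address 1: 192.168.230.1 kubernetes.default.svc.cluster.local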


# Delete busybox

$ kubectl delete -f busybox.yaml



# Check the web endpoint

http://192.168.75.211:8080/


# Check the UI

http://192.168.75.211:8080/ui    >> redirects to the page below

http://192.168.75.211:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui



# Developing on a Mac and pushing the source to the Master (for reference)

# If the remote has v1.1.2 both as a tag and as a branch, specify the remote branch explicitly

# git push [repository] (local_branch:)remote_branch

$ git push origin refs/heads/v1.1.2


$ git config --global user.name "Seungkyu Ahn"

$ git config --global user.email "seungkyua@gmail.com"


# Copy locally modified files to the Master server

$ vi ~/bin/cmaster.sh

#!/bin/bash


function change_directory {

  cd /Users/ahnsk/Documents/go_workspace/src/k8s.io/kubernetes

}


change_directory

files=$(git status | grep -E 'modified|new file' | awk -F':' '{print$2}')


for file in $files; do

    scp $file stack@192.168.230.211:/home/stack/Documents/go_workspace/src/k8s.io/kubernetes/$file

done
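
To use the script (assuming ~/bin is on the PATH and the stack account's ssh key is set up):

$ chmod +x ~/bin/cmaster.sh

$ cmaster.sh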



# Running kube-apiserver from source

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cmd/kube-apiserver


$ sudo -E go run apiserver.go --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --service-cluster-ip-range=192.168.230.0/24 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DenyEscalatingExec,SecurityContextDeny --service-node-port-range=30000-32767 --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key



# Generating documentation

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cmd/genkubedocs

$ mkdir -p temp

$ go run gen_kube_docs.go temp kube-apiserver



7. Deploying a sample app

https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook


# Run from the directory where kubernetes is installed

$ sudo kubectl create -f examples/guestbook/redis-master-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods

$ sudo kubectl describe pods/redis-master-xssrd

$ sudo kubectl logs <pod_name>          # check the container log


$ sudo kubectl create -f examples/guestbook/redis-master-service.yaml

$ sudo kubectl get services


$ sudo kubectl create -f examples/guestbook/redis-slave-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods


$ sudo kubectl create -f examples/guestbook/redis-slave-service.yaml

$ sudo kubectl get services



$ sudo kubectl create -f examples/guestbook/frontend-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods



$ sudo kubectl create -f examples/guestbook/frontend-service.yaml

$ sudo kubectl get services





$ sudo kubectl describe services frontend

$ sudo kubectl get ep


# View the DNS service

$ sudo kubectl get services kube-dns --namespace=kube-system


# View the environment variables

$ sudo kubectl get pods -o json

$ sudo kubectl get pods -o wide

$ sudo kubectl exec frontend-cyite -- printenv | grep SERVICE
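
Sample output (service names and addresses depend on what is deployed and on the service CIDR):

REDIS_MASTER_SERVICE_HOST=192.168.230.55
REDIS_MASTER_SERVICE_PORT=6379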


8. Deleting the sample app

$ sudo kubectl stop rc -l "name in (redis-master, redis-slave, frontend)"

$ sudo kubectl delete service -l "name in (redis-master, redis-slave, frontend)"



# Network

TAP  : used to connect a vm to eth0 (the physical port); the path is tap <-> bridge <-> eth0

VETH : used to connect docker <-> bridge, docker <-> OVS, and bridge <-> OVS


# interconnecting namespaces

http://www.opencloudblog.com/?p=66



# Finding which veth is paired with which Docker container

$ vi veth.sh


#!/bin/bash


set -o errexit

set -o nounset

#set -o pipefail


VETHS=`ifconfig -a | grep "Link encap" | sed 's/ .*//g' | grep veth`

DOCKERS=$(docker ps -a | grep Up | awk '{print $1}')


for VETH in $VETHS

do

  PEER_IFINDEX=`ethtool -S $VETH 2>/dev/null | grep peer_ifindex | sed 's/ *peer_ifindex: *//g'`

  for DOCKER in $DOCKERS

  do

    PEER_IF=`docker exec $DOCKER ip link list 2>/dev/null | grep "^$PEER_IFINDEX:" | awk '{print $2}' | sed 's/:.*//g'`

    if [ -z "$PEER_IF" ]; then

      continue

    else

      printf "%-10s is paired with %-10s on %-20s\n" $VETH $PEER_IF $DOCKER

      break

    fi

  done

done
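
Usage (requires ethtool and at least one running container; the output line is illustrative):

$ chmod +x veth.sh

$ sudo ./veth.sh
veth1a2b3c is paired with eth0       on 4f5e6d7c8b9a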






[ Installing Oracle JDK 8 with update-alternatives ]

$ cd /opt

$ sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u20-b26/jdk-8u20-linux-x64.tar.gz"


$ sudo tar -zxvf jdk-8u20-linux-x64.tar.gz


$ sudo update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_20/bin/java 2


$ sudo update-alternatives --config java


There are 2 choices for the alternative java (providing /usr/bin/java).


  Selection    Path                                             Priority   Status

------------------------------------------------------------

* 0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071       auto mode

  1            /opt/jdk1.8.0_20/bin/java                        2          manual mode

  2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071       manual mode


Press enter to keep the current choice[*], or type selection number: 1



$ sudo update-alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_20/bin/javac 2

$ sudo update-alternatives --config javac



$ sudo update-alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_20/bin/jar 2

$ sudo update-alternatives --config jar


$ vi ~/.bashrc


export JAVA_HOME=/opt/jdk1.8.0_20

export JRE_HOME=/opt/jdk1.8.0_20/jre

export PATH=$PATH:/opt/jdk1.8.0_20/bin:/opt/jdk1.8.0_20/jre/bin
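
Reload the shell configuration and check (the version string assumes the JDK unpacked above):

$ source ~/.bashrc

$ java -version             # java version "1.8.0_20"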


$ echo $JAVA_HOME

$ echo $JRE_HOME






[ Silicon Valley Season 2 Episode List ]

Episode 1  Sand Hill Shuffle

Episode 2  Runaway Devaluation

Episode 3  Bad Money

Episode 4  The Lady

Episode 5  Server Space

Episode 6  Homicide

Episode 7  Adult Content

Episode 8  White Hat / Black Hat

Episode 9  Binding Arbitration

Episode 10 Two Days of the Condor












docker ssh + git

Container 2015. 8. 13. 15:20

1. Installing docker

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

#deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

#deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

#deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

#deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

#deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

#deb https://apt.dockerproject.org/repo ubuntu-wily main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo service docker restart


# Installing Docker on a Mac

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"


$ brew update

$ brew install caskroom/cask/brew-cask


$ brew cask install virtualbox

$ brew install docker

$ brew install boot2docker


$ boot2docker init

$ boot2docker up


To connect the Docker client to the Docker daemon, please set:

    export DOCKER_HOST=tcp://192.168.59.103:2376

    export DOCKER_CERT_PATH=/Users/ahnsk/.boot2docker/certs/boot2docker-vm

    export DOCKER_TLS_VERIFY=1


$ $(boot2docker shellinit)       # set the environment variables


$ docker info

$ boot2docker ssh                 # connect to the vm

$ boot2docker ip                   # show the vm's ip


$ docker run --rm -ti ubuntu:latest /bin/bash        # test the ubuntu image

$ docker run --rm -ti fedora:latest /bin/bash         # test the fedora image

$ docker run --rm -ti centos:latest /bin/bash         # test the centos image


# Upgrade the Boot2docker VM image

$ boot2docker stop

$ boot2docker download

$ boot2docker up


$ boot2docker delete


# Logging in to Docker Hub

$ docker login


Username: seungkyua

Password: 

Email: seungkyua@gmail.com


$  cat ~/.docker/config.json


$ docker logout


# Switching the Docker registry to insecure


# boot2docker

$ sudo touch /var/lib/boot2docker/profile

$ sudo vi /var/lib/boot2docker/profile

EXTRA_ARGS="--insecure-registry 192.168.59.103:5000"

$ sudo /etc/init.d/docker restart


# Ubuntu

$ sudo vi /etc/default/docker

DOCKER_OPTS="--insecure-registry 192.168.59.103:5000"

$ sudo service docker restart


# Fedora

$ sudo vi /etc/sysconfig/docker

OPTIONS="--insecure-registry 192.168.59.103:5000"

$ sudo systemctl daemon-reload

$ sudo systemctl restart docker


# CoreOS

$ sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

$ sudo vi  /etc/systemd/system/docker.service

ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// \

$DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ \

--insecure-registry 192.168.59.103:5000

$ sudo systemctl daemon-reload

$ sudo systemctl restart docker


# Running a local registry

$ sudo mkdir -p /var/lib/registry

$ docker run -d -p 5000:5000 \

-v /var/lib/registry:/var/lib/registry \

--restart=always --name registry registry:2



# Test

$ docker pull ubuntu

$ docker tag ubuntu 192.168.59.103:5000/ubuntu


$ docker push 192.168.59.103:5000/ubuntu

$ docker pull 192.168.59.103:5000/ubuntu


$ docker stop registry

$ docker rm -v registry




2. Writing a Dockerfile

# mkdir docker

# cd docker

# mkdir git-ssh

# cd git-ssh

# vi Dockerfile

FROM ubuntu:14.04


RUN apt-get -y update

RUN apt-get -y install openssh-server

RUN apt-get -y install git


# Setting openssh

RUN mkdir /var/run/sshd

RUN sed -i "s/#PasswordAuthentication yes/PasswordAuthentication no/" /etc/ssh/sshd_config


# Adding git user

RUN adduser --system git

RUN mkdir -p /home/git/.ssh


# Clearing and setting authorized ssh keys

RUN echo '' > /home/git/.ssh/authorized_keys

RUN echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTFEBrNfpSIvgz7mZ+I96/UqKFCxcouoiDDS9/XPNB1Tn7LykgvHHaR5mrPOQIJ/xTFhSVWpwsmEvTLdv3QJYLB5P+UfrjY5fUmiYgGpKKr5ym2Yua2wykHgQYdT4+lLhyq3BKbnG9vgc/FQlaCWntLckJfAYnHIGYWl1yooMAOka0/pOeJ+hPF0TxLQtrjoVJWiaHLVnB8qgPiCgvSyKROvW6cs1AhY9abasUWrQ5eNsLLMY1rDWccantMjVlcUdDZuPzI4g+/MtfE3IAs7JxtmwMvCMFRMuzWTtZkZSVyqpEGDeLnPGgMNTYUwaxQhlJLtcYnNTqdyZr8ZCcz3zP stephen@Stephenui-MacBook-Pro.local' >> /home/git/.ssh/authorized_keys


# Updating shell to bash

RUN sed -i s#/home/git:/bin/false#/home/git:/bin/bash# /etc/passwd


EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]

$ docker build -t git-ssh-img .

$ docker run --name git-ssh -d -p 1234:22 git-ssh-img
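
To test the container (assuming the private key matching the authorized_keys entry above is loaded):

$ ssh -p 1234 git@localhost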


3. Starting a container with a bash shell

$ docker run -i -t --rm --net='host' ubuntu:14.04 bash


4. Attaching to a running container

$ docker exec -it <containerIdOrName> bash


5. Listing all containers

# docker ps -a


6. Removing all containers

$ docker ps -a | awk '{print $1}' | grep -v CONTAINER | xargs sudo docker rm


7. Removing all <none> images

$ docker images | grep "<none>" | awk '{print $3}' | xargs sudo docker rmi


8. Searching for and running images

$ sudo docker search ubuntu

$ sudo docker run --name myssh -d -p 4444:22 rastasheep/ubuntu-sshd


9. Adding the stack user to the docker group

$ sudo usermod -aG docker stack

$ sudo service docker restart

# log in again for the group change to take effect


10. Pulling an image

$ docker pull ubuntu:latest


11. Running with a bash shell, then detaching

$ docker run -i -t --name hello ubuntu /bin/bash

root@bb97e5f57596:/#


Ctrl + p, Ctrl + q        => detach without stopping the container


$ docker attach hello            => reattach (press Enter once)


12. Installing nginx

# mkdir data


# vi Dockerfile

FROM ubuntu:14.04.3


RUN apt-get update

RUN apt-get install -y nginx

RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf

RUN chown -R www-data:www-data /var/lib/nginx


VOLUME ["/data", "/etc/nginx/site-enabled", "/var/log/nginx"]


WORKDIR /etc/nginx


CMD ["nginx"]


EXPOSE 80

EXPOSE 443


# docker build -t nginx:0.1 .

# docker run --name hello-nginx -d -p 2080:80 -v /root/data:/data nginx:0.1
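
A quick check (403 Forbidden is expected until content is placed under /root/data):

# curl http://localhost:2080/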



13. Copying a file out of a container

# docker cp hello-nginx:/etc/nginx/nginx.conf ./


14. Committing a container to an image

# docker commit -a "aaa <aaa@aaa.com>" -m "Initial commit" hello-nginx nginx:0.2


15. Viewing image and container changes

# docker diff CONTAINER_ID

# docker history IMAGE_ID


16. Inspecting a container's details

# docker inspect hello-nginx


17. Finding a docker container's pid

$ docker inspect -f '{{.State.Pid}}' containerID
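
For example, the pid can be used to enter the container's network namespace from the host (nsenter must be installed; containerID is a placeholder):

$ PID=$(docker inspect -f '{{.State.Pid}}' containerID)

$ sudo nsenter --target $PID --net ip addr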


18. Point-to-point communication between containers (create a namespace per container and link them with a VETH pair)

https://docs.docker.com/v1.7/articles/networking/#building-your-own-bridge


$ docker run -i -t --rm --net=none base /bin/bash

root@1f1f4c1f931a:/#


$ docker run -i -t --rm --net=none base /bin/bash

root@12e343489d2f:/#


# Learn the container process IDs

# and create their namespace entries


$ docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a

2989

$ docker inspect -f '{{.State.Pid}}' 12e343489d2f

3004

$ sudo mkdir -p /var/run/netns

$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989

$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004


# Create the "peer" interfaces and hand them out


$ sudo ip link add A type veth peer name B


$ sudo ip link set A netns 2989

$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A

$ sudo ip netns exec 2989 ip link set A up

$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A


$ sudo ip link set B netns 3004

$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B

$ sudo ip netns exec 3004 ip link set B up

$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
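
Verify the link between the two namespaces:

$ sudo ip netns exec 2989 ping -c 1 10.1.1.2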



# Another ssh sample

FROM ubuntu:14.04

RUN echo "deb http://archive.ubuntu.com/ubuntu/ trusty main universe" > /etc/apt/sources.list

RUN apt-get update


RUN apt-get install -y openssh-server

RUN mkdir /var/run/sshd

RUN echo 'root:screencast' | chpasswd


EXPOSE 22

CMD /usr/sbin/sshd -D



# NodeJS sample

$ git clone https://github.com/spkane/docker-node-hello.git

$ cd docker-node-hello


$ brew install tree

$ tree -a -I .git             # view the directory as a tree


$ docker build --no-cache -t example/docker-node-hello:latest .

$ docker run -d -p 8081:8080 example/docker-node-hello:latest    # host 8081, docker 8080


$ echo $DOCKER_HOST


$ docker stop DOCKER_ID


# Passing environment variables with -e

$ docker run -d -p 8081:8080 -e WHO="Seungkyu Ahn" example/docker-node-hello:latest


$ docker inspect DOCKER_ID










AngularJS Installation

Programming 2015. 7. 26. 21:53

1. Installing nodejs and npm on a Mac via brew

http://brew.sh

$ brew install npm                   # installing npm pulls in nodejs as a dependency


2. Installing nodejs on Ubuntu

http://nodejs.org

$ sudo apt-get install g++


$ ./configure

$ make

$ sudo make install


3. Upgrading npm

$ sudo npm install -g npm           # /usr/local/lib/node_modules/npm


4. Installing and using bower (a package manager for the web)

http://bower.io

$ sudo npm install -g bower


$ bower install jquery                                          # registered package

$ bower install desandro/masonry                         # GitHub shorthand
$ bower install git://github.com/user/package.git   # Git endpoint

$ bower install http://example.com/script.js           # URL


$ bower install angular         # creates the bower_components/angular subdirectory and downloads into it


$ vi .bowerrc

{

  "directory": "WebContent/bower"

}

$ bower install angular         # creates the WebContent/bower/angular subdirectory and downloads into it


5. Creating the project in eclipse (Project Type : Dynamic Web Project)





















[ Kubernetes Manual Installation with systemd ]

0. Server configuration

Master : 192.168.75.129  (etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler)

Node   : 192.168.75.130  (flannel, kube-proxy, kubelet)


# gpasswd -a stack sudo  (this did not work; configure sudoers with visudo as in the previous post)


0. Download the Kubernetes source and point WebStorm at it

# Download the source

# Install Go and set its path first (http://ahnseungkyu.com/204)

$ cd ~/Documents/go_workspace/src

$ go get k8s.io/kubernetes


$ cd k8s.io/kubernetes

$ git checkout -b v1.1.2 tags/v1.1.2


# Create a Go project in WebStorm via New Project

Path : ~/Documents/go_workspace/src/k8s.io/kubernetes


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> Go SDK: add

Path : /usr/local/go


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> Go Libraries >> Project libraries: add the path below

Path : Documents/go_workspace/src/k8s.io/kubernetes/Godeps/_workspace



[ Install on both the Master and Minion servers ]

1. Install the required software with apt-get

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

deb https://apt.dockerproject.org/repo ubuntu-wily main


# Ubuntu Xenial (16.04)

deb https://apt.dockerproject.org/repo ubuntu-xenial main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo apt-get install curl

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo systemctl start docker.service



2. Install go and ansible with apt-get

$ sudo apt-get install linux-libc-dev golang gcc

$ sudo apt-get install ansible



3. Register the host entries (on every server, as root)

# echo "192.168.75.129 kube-master

192.168.75.130 kube-node01" >> /etc/hosts



[ Installing the Kubernetes Master ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-master

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.129:2380 \

--listen-peer-urls http://192.168.75.129:2380 \

--listen-client-urls http://192.168.75.129:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.129:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl start etcd.service



$ etcdctl set /coreos.com/network/config "{\"Network\":\"172.16.0.0/16\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.10.0-24 "{\"PublicIP\":\"192.168.75.129\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.93.0-24 "{\"PublicIP\":\"192.168.75.130\"}"


$ etcdctl ls /                          # etcdctl ls --recursive shows the whole tree

/coreos.com/network/config

/coreos.com/network/subnets/172.16.10.0-24

/coreos.com/network/subnets/172.16.93.0-24

/registry


$ etcdctl get /coreos.com/network/config

{"Network":"172.16.0.0/16"}


$ etcdctl get /coreos.com/network/subnets/172.16.10-24     # the Master's flannel0 bridge ip

{"PublicIP":"192.168.75.129"}


$ etcdctl get /coreos.com/network/subnets/172.16.93-24     # Node01's flannel0 bridge ip

{"PublicIP":"192.168.75.130"}



5. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.4 tags/v0.5.4     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a newly created bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which ports etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://localhost:4001,http://localhost:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service



6. Install the Kubernetes API Server

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.1 origin/release-1.1

$ sudo make release


$ cd _output/release-tars

$ sudo tar zxvf kubernetes-server-linux-amd64.tar.gz


$ cd ~

$ git clone https://github.com/kubernetes/contrib.git

$ sudo cp -R ~/downloads/kubernetes/_output/* ~/downloads/contrib/ansible/roles/

$ cd ~/downloads/contrib/ansible/roles

$ sudo chown stack.stack -R *

$ vi  ~/downloads/contrib/ansible/inventory

[masters]

kube-master


[etcd]

kube-master


[nodes]

kube-node01



$ sudo su -

# ssh-keygen

# for node in kube-master kube-node01; do ssh-copy-id ${node}; done

# exit


$ vi ~/downloads/contrib/ansible/group_vars/all.yml

source_type: localBuild

cluster_name: cluster.local

ansible_ssh_user: root

kube_service_addresses: 10.254.0.0/16

networking: flannel

flannel_subnet: 172.16.0.0

flannel_prefix: 12

flannel_host_prefix: 24

cluster_logging: true

cluster_monitoring: true

kube-ui: true

dns_setup: true

dns_replicas: 1


$ cd ~/downloads/contrib/ansible

$ ./setup.sh








$ sudo cp kubernetes/server/bin/kube-apiserver /usr/bin

$ sudo cp kubernetes/server/bin/kube-controller-manager /usr/bin

$ sudo cp kubernetes/server/bin/kube-scheduler /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-apiserver.service


[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

ExecStart=/usr/bin/kube-apiserver \

--api-rate=10 \

--bind-address=0.0.0.0 \

--etcd_servers=http://127.0.0.1:4001 \

--portal_net=10.254.0.0/16 \                              # where is this actually used?

--insecure-bind-address=0.0.0.0 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--kubelet_port=10250 \

--service_account_key_file=/tmp/kube-serviceaccount.key \

--service_account_lookup=false \

--service-cluster-ip-range=172.16.0.0/16            # should this be aligned with flannel?

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-apiserver.service


$ sudo systemctl enable kube-apiserver.service

$ sudo systemctl start kube-apiserver.service


$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl restart kube-apiserver


7. Install the Kubernetes Controller Manager

$ cd /lib/systemd/system

$ sudo vi kube-controller-manager.service


[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-controller-manager \

--address=0.0.0.0 \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true 

#--service_account_private_key_file=/tmp/kube-serviceaccount.key

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-controller-manager.service


$ sudo systemctl enable kube-controller-manager.service

$ sudo systemctl start kube-controller-manager.service


$ sudo systemctl daemon-reload

$ sudo systemctl restart kube-controller-manager


8. Install the Kubernetes Scheduler

$ cd /lib/systemd/system

$ sudo vi kube-scheduler.service


[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-scheduler \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-scheduler.service


$ sudo systemctl enable kube-scheduler.service

$ sudo systemctl start kube-scheduler.service


9. Register the ip range flannel will use in etcd (needed once flannel runs on the nodes)

$ sudo etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'



[ Service Cluster IP Range ]

10.0.0.0 - 10.255.255.255 (10/8 prefix)

172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

192.168.0.0 - 192.168.255.255 (192.168/16 prefix)




[ Installing the Kubernetes Minion ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-node01

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.130:2380 \

--listen-peer-urls http://192.168.75.130:2380 \

--listen-client-urls http://192.168.75.130:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.130:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl start etcd.service


$ etcdctl member list
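
Expected output once both members have joined (member ids vary):

<member-id>: name=kube-master peerURLs=http://kube-master:2380 clientURLs=http://192.168.75.129:2379
<member-id>: name=kube-node01 peerURLs=http://kube-node01:2380 clientURLs=http://192.168.75.130:2379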


5. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.5 tags/v0.5.5     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a newly created bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which ports etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-node01:4001,http://kube-node01:2379 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://kube-node01:4001,http://kube-node01:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service




6. Install the Kubernetes Proxy

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo cp kubernetes/server/bin/kube-proxy /usr/bin

$ sudo cp kubernetes/server/bin/kubelet /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-proxy.service


[Unit]

Description=Kubernetes Proxy

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kube-proxy \

--master=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--v=0                                                     # debug mode

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-proxy.service


$ sudo systemctl enable kube-proxy.service

$ sudo systemctl start kube-proxy.service



7. Install the Kubernetes Kubelet

$ cd /lib/systemd/system

$ sudo vi kubelet.service


[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kubelet \

--address=0.0.0.0 \

--port=10250 \

--hostname_override=kube-minion \

--api_servers=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--cluster_domain=cluster.local \

--v=0                                                      # debug mode

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kubelet.service


$ sudo systemctl enable kubelet.service

$ sudo systemctl start kubelet.service


# Restart the docker service

$ sudo service docker restart

8. Install flannel (it pulls the Network settings from etcd) - operation still needs verification

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a newly created bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0



9. Check the installed nodes

$ sudo kubectl get nodes


NAME             LABELS                                      STATUS

192.168.75.202   kubernetes.io/hostname=192.168.75.202       NotReady

kube-minion      kubernetes.io/hostname=kube-minion          Ready


10. Bring up the services

# Master server

$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service


# Minion server

$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kubelet.service



11. Bring up a mysql service

$ mkdir pods

$ cd pods

$ vi mysql.yaml

apiVersion: v1

kind: Pod

metadata:

  name: mysql

  labels:

    name: mysql

spec:

  containers:

    - resources:

        limits :

          cpu: 1

      image: mysql

      name: mysql

      env:

        - name: MYSQL_ROOT_PASSWORD

          # change this

          value: root

      ports:

        - containerPort: 3306

          name: mysql


$ sudo kubectl create -f mysql.yaml

$ sudo kubectl get pods


$ vi mysql-service.yaml

apiVersion: v1

kind: Service

metadata:

  labels:

    name: mysql

  name: mysql

spec:

  publicIPs:

    - 192.168.75.202

  ports:

    # the port that this service should serve on

    - port: 3306

  # label keys and values that must match in order to receive traffic for this service

  selector:

    name: mysql


$ sudo kubectl create -f mysql-service.yaml

$ sudo kubectl get services
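
A quick connectivity check against the public IP (assumes a mysql client is available; the root password was set in mysql.yaml above):

$ mysql -h 192.168.75.202 -uroot -proot -e 'select 1'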







**************************************************

*****  Installing with juju (failed)                      ***********

**************************************************

1. Install juju

$ sudo add-apt-repository ppa:juju/stable

$ sudo apt-get update

$ sudo apt-get install juju-core juju-quickstart

$ juju quickstart u/kubernetes/kubernetes-cluster












**************************************************

*****  For reference only                                ***********

**************************************************


3. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1

$ ./build                   # builds the flanneld binary into a newly created bin directory

$ cp bin/flanneld /opt/bin




4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.1.1/etcd-v2.1.1-linux-amd64.tar.gz -o etcd-v2.1.1-linux-amd64.tar.gz

$ tar xzvf etcd-v2.1.1-linux-amd64.tar.gz

$ sudo cp  etcd-v2.1.1-linux-amd64/bin/etcd* /opt/bin

$ cd /var/lib

$ sudo mkdir etcd

$ sudo chown stack.docker etcd

$ sudo mkdir /var/run/kubernetes

$ sudo chown stack.docker /var/run/kubernetes

$ sudo vi /etc/default/etcd

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"



3. Install the Kubernetes Master

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/master/* /usr/bin

$ sudo cp -f binaries/kubectl /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl enable /etc/systemd/etcd.service

$ sudo systemctl enable /etc/systemd/kube-apiserver.service

$ sudo systemctl enable /etc/systemd/kube-controller-manager.service

$ sudo systemctl enable /etc/systemd/kube-scheduler.service


$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service






4. Install the Kubernetes Minion

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/minion/* /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd


$ sudo systemctl enable /etc/systemd/kube-proxy.service

$ sudo systemctl enable /etc/systemd/kube-kubelet.service


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kube-kubelet.service












4. Install kubernetes

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo chown -R stack.docker *

$ tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo su -

$ echo "192.168.75.201 kube-master

192.168.75.202 kube-minion" >> /etc/hosts

$ exit





5. Install the kubernetes Master


# Services that run on kube-master

etcd

flanneld

kube-apiserver

kube-controller-manager

kube-scheduler


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ cp server/bin/kube-apiserver /opt/bin/

$ cp server/bin/kube-controller-manager /opt/bin/

$ cp server/bin/kube-scheduler /opt/bin/

$ cp server/bin/kubectl /opt/bin/

$ cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/etcd.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-apiserver.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-controller-manager.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-scheduler.conf /etc/init/


$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/etcd /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-apiserver /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-controller-manager /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-scheduler /etc/init.d/


$ sudo vi /etc/default/kube-apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet_port=10250"

KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

KUBE_API_ARGS=""



$ sudo vi /etc/default/kube-controller-manager

KUBELET_ADDRESSES="--machines=192.168.75.202"






6. Install the Minion


# Services that run on kube-minion

flanneld

kubelet

kube-proxy


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ sudo cp server/bin/kubelet /opt/bin/

$ sudo cp server/bin/kube-proxy /opt/bin/

$ sudo cp server/bin/kubectl /opt/bin/

$ sudo cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kubelet.conf /etc/init

$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kube-proxy.conf /etc/init


$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kubelet /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kube-proxy /etc/init.d/












$ cd ~/kubernetes

$ vi cluster/ubuntu/config-default.sh

export nodes=${nodes:-"stack@192.168.75.201 stack@192.168.75.202"}

roles=${roles:-"ai i"}

export NUM_MINIONS=${NUM_MINIONS:-2}

export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}

export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}


$ cd cluster

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh








3. Install go (binary tarball)

https://golang.org/dl/

$ curl -L https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz -o go1.4.2.linux-amd64.tar.gz

$ tar xvf go1.4.2.linux-amd64.tar.gz

























[ Installing Gerrit and Jenkins ]

1. Installing git

# apt-get install git-core git-review

# adduser gerrit

# mkdir -p /git_repo

# chown -R gerrit.gerrit /git_repo

# mkdir -p /git_review

# chown -R gerrit.gerrit /git_review

# git init --bare /git_repo/paas.git


2. Downloading gerrit

https://gerrit-releases.storage.googleapis.com/index.html


3. Installing mysql

# mysql -uroot -p

mysql> CREATE USER 'gerrit'@'localhost' IDENTIFIED BY 'secret';

mysql> CREATE DATABASE reviewdb;

mysql> ALTER DATABASE reviewdb charset=utf8;

mysql> GRANT ALL ON reviewdb.* TO 'gerrit'@'localhost';

mysql> FLUSH PRIVILEGES;



4. Installing apache2

$ sudo apt-get install apache2 apache2-utils libapache2-mod-proxy-html libxml2-dev

$ sudo a2enmod proxy_http

$ sudo a2enmod proxy

$ sudo service apache2 restart


# sudo vi /etc/apache2/sites-available/gerrit.conf

<VirtualHost *:8080>

  ServerName localhost

  ProxyRequests Off

  ProxyVia Off

  ProxyPreserveHost On


  <Proxy *>

    Order deny,allow

    Allow from all

  </Proxy>


  <Location /login/>

    AuthType Basic

    AuthName "Gerrit Code Review"

    Require valid-user

    AuthUserFile /git_review/etc/passwords

  </Location>


  AllowEncodedSlashes On

  ProxyPass / http://127.0.0.1:8081/

  ProxyPassReverse / http://127.0.0.1:8081/

#  RequestHeader set REMOTE-USER %{REMOTE_USER}      # HTTP auth based on external SSO validation

</VirtualHost>


$ cd /etc/apache2/sites-available

$ sudo a2ensite gerrit.conf

$ sudo vi /etc/apache2/ports.conf

Listen 8080


$ sudo service apache2 restart




5. Initializing the gerrit site

# apt-get install openjdk-7-jdk


# How to install oracle java instead

# add-apt-repository ppa:webupd8team/java

# apt-get update

# apt-get install oracle-java7-installer



# su - gerrit

$ cd /git_review

$ cp /home/stack/Downloads/gerrit-2.11.3.war .

$ java -jar gerrit-2.11.3.war init -d /git_review

 *** Git Repositories

*** 


Location of Git repositories   [git]: /git_repo


*** SQL Database

*** 


Database server type           [h2]: mysql


Gerrit Code Review is not shipped with MySQL Connector/J 5.1.21

**  This library is required for your configuration. **

Download and install it now [Y/n]?

Downloading http://repo2.maven.org/maven2/mysql/mysql-connector-java/5.1.21/mysql-connector-java-5.1.21.jar ... OK

Checksum mysql-connector-java-5.1.21.jar OK

Server hostname                [localhost]: 

Server port                    [(mysql default)]: 

Database name                  [reviewdb]: 

Database username              [gerrit]:

gerrit2's password            : secret


*** Index

*** 


Type                           [LUCENE/?]: 


The index must be rebuilt before starting Gerrit:

  java -jar gerrit.war reindex -d site_path


*** User Authentication

*** 


Authentication method          [OPENID/?]: http

# Get username from custom HTTP header [y/N]? y                    # when using external SSO HTTP auth

# Username HTTP Header [SM_USER]: REMOTE_USER_RETURN    # when using external SSO HTTP auth

SSO logout URL  : http://aa:aa@192.168.75.141:8080/


*** Review Labels

*** 


Install Verified label         [y/N]? 


*** Email Delivery

*** 


SMTP server hostname       [localhost]: smtp.gmail.com

SMTP server port               [(default)]: 465

SMTP encryption                [NONE/?]: SSL

SMTP username                 [gerrit]: skanddh@gmail.com


*** Container Process

*** 


Run as                         [gerrit]: 

Java runtime                   [/usr/local/jdk1.8.0_31/jre]: 

Copy gerrit-2.11.3.war to /git_review/bin/gerrit.war [Y/n]? 

Copying gerrit-2.11.3.war to /git_review/bin/gerrit.war


*** SSH Daemon

*** 


Listen on address              [*]: 

Listen on port                 [29418]: 


Gerrit Code Review is not shipped with Bouncy Castle Crypto SSL v151

  If available, Gerrit can take advantage of features

  in the library, but will also function without it.

Download and install it now [Y/n]? N


*** HTTP Daemon

*** 


Behind reverse proxy           [y/N]? y

Proxy uses SSL (https://)      [y/N]? 

Subdirectory on proxy server   [/]: 

Listen on address              [*]: 127.0.0.1        # because gerrit sits behind the reverse proxy

Listen on port                 [8081]: 

Canonical URL                  [http://127.0.0.1/]:


$ java -jar bin/gerrit.war reindex -d /git_review


$ htpasswd -c /git_review/etc/passwords skanddh

# service apache2 restart



6. Starting/stopping the daemon

$ /git_review/bin/gerrit.sh restart

$ /git_review/bin/gerrit.sh start

$ /git_review/bin/gerrit.sh stop


$ sudo ln -snf /git_review/bin/gerrit.sh /etc/init.d/gerrit.sh

$ sudo ln -snf /etc/init.d/gerrit.sh /etc/rc3.d/S90gerrit



[ Enabling HTTPS ]

$ vi gerrit.conf

[httpd]

         listenUrl = proxy-https://127.0.0.1:8081/


$ vi /etc/httpd/conf/httpd.conf

LoadModule ssl_module modules/mod_ssl.so

LoadModule mod_proxy modules/mod_proxy.so

<VirtualHost _default_:443>

SSLEngine on

SSLProtocol all -SSLv2

SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

SSLCertificateFile /etc/pki/tls/certs/server.crt

SSLCertificateKeyFile /etc/pki/tls/private/server.key

SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

ProxyPass / http://127.0.0.1:8081/

ProxyPassReverse / http://127.0.0.1:8081/

</VirtualHost>


Generating the certificate

$ sudo mkdir -p /etc/pki/tls/private

$ sudo mkdir -p /etc/pki/tls/certs

$ sudo openssl req -x509 -days 3650 \

-nodes -newkey rsa:2048 \

-keyout /etc/pki/tls/private/server.key -keyform pem \

-out /etc/pki/tls/certs/server.crt -outform pem



-----

Country Name (2 letter code) [AU]:KO

State or Province Name (full name) [Some-State]:Seoul

Locality Name (eg, city) []:Seoul

Organization Name (eg, company) [Internet Widgits Pty Ltd]:MyCompany

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:myhost.mycompany.com

Email Address []:admin@myhost.mycompany.com

$ cd /etc/pki/tls/certs

$ sudo cp server.crt server-chain.crt
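
To confirm the generated certificate (prints the subject and validity period):

$ sudo openssl x509 -in /etc/pki/tls/certs/server.crt -noout -subject -dates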



Registering user.email and user.name

$ git config user.name "Seungkyu Ahn"

$ git config user.email "skanddh@gmail.com"


Storing the password

$ git config credential.helper cache                             # cached for 15 minutes by default

$ git config credential.helper 'cache --timeout=3600'      # cache for 1 hour


Installing the commit message hook

$ curl -Lo .git/hooks/commit-msg http://localhost:8080/tools/hooks/commit-msg

$ chmod +x .git/hooks/commit-msg
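
The hook appends a Change-Id footer to each commit message, which gerrit uses to group patch sets across amends, e.g.:

Change-Id: I8473b95934b5732ac55d26311a706c9c2bde9940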


Review (register the gerrit remote url)

$ git remote add gerrit http://localhost:8080/hello-project


# Register this in the server-side project in advance so it is downloaded on clone

$ vi .gitreview


[gerrit]

host=localhost

port=8080

project=hello-project

defaultbranch=master


$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git review

(edit 2)

$ git add

$ git commit --amend

$ git review



Review (pushing directly)

$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git push origin HEAD:refs/for/master%topic=bug/1



[ Installing Jenkins ]

Download jenkins into tomcat's webapps directory

# adduser jenkins

# chown -R jenkins.jenkins apache-tomcat-8.0.26

# su - jenkins


http://jenkins-ci.org/

$ cd /usr/local/apache-tomcat-8.0.26/webapps

$ wget http://updates.jenkins-ci.org/download/war/1.580.1/jenkins.war

$ wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war                       # latest version


Changing the tomcat port and URIEncoding

$ vi /usr/local/apache-tomcat-8.0.26/conf/server.xml


<Connector port="7070" protocol="HTTP/1.1"

           connectionTimeout="20000"

           redirectPort="8443"

           URIEncoding="UTF-8" />


$ /usr/local/apache-tomcat-8.0.26/bin/startup.sh


Accessing jenkins

http://192.168.75.141:7070/jenkins/


Configure security in the web UI

(left menu) Manage Jenkins

Configure Global Security

  - Enable security

  - Security Realm : Jenkins’ own user database

  - Authorization : Matrix-based security

  - User/group to add: admin


Save, then sign up with the admin account



[ Integrating Jenkins with Gerrit ]

1. Install the jenkins plugins

1. Jenkins Git Client plugin

2. Jenkins Git Plugin : integrates jenkins with git

3. Jenkins Gerrit Trigger plugin : fetches the patch set when a gerrit change occurs, builds it, and leaves a score

4. Hudson Gerrit plugin : enables the git plugin configuration


2. The gerrit trigger plugin

1. HTTP/S Canonical URL : the URL pointing at gerrit changes and patch sets

2. SSH connection : connects to gerrit to detect events coming from it


Generate an ssh key as the user running jenkins, and create a batch user in gerrit for jenkins to use

If jenkins is run under the jenkins account, the internal user creation below is unnecessary

If the users differ, create the internal user by running the create-account command as gerrit's admin account

# log in as the skanddh account

$ ssh-keygen -t rsa

$ ssh -p 29418 skanddh@192.168.75.141


# skanddh must be gerrit's admin account; run this as skanddh

$ sudo cat /home/jenkins/.ssh/id_rsa.pub | \

ssh -p 29418 skanddh@192.168.75.141 gerrit create-account \

--group "'Non-Interactive Users'" --full-name Jenkins \

--email jenkins@localhost.com --ssh-key - jenkins


Confirm that the Non-Interactive Users group in All-Projects has the following permissions:

1. Stream events permission lets it detect gerrit changes remotely

2. Read on refs/* lets it read and clone changes from the gerrit repository

3. Label Code-Review (Verified) -1..+1 on refs/heads/* lets it score changes


Configuring the gerrit trigger plugin

Open the jenkins url

http://192.168.75.141:7070/jenkins/gerrit-trigger


1. Configure the URL and the SSH connection

    Name : Gerrit

    Hostname : 192.168.75.141

    Frontend URL : http://192.168.75.141:8080

    SSH Port : 29418

    Username : jenkins

    E-mail : jenkins@localhost.com

    SSH Keyfile : /home/jenkins/.ssh/id_rsa

    SSH Keyfile Password :


2. Verify with Test Connection

3. Restart jenkins using the Start/Stop button at the bottom of the settings page


Open the jenkins url

http://192.168.75.141:7070/jenkins/gerrit_manual_trigger

Enter status:open in the Query field -> click the Search button

Confirm the changes awaiting review at http://192.168.75.141:8080/#q/status:open,n,z


Configuring the gerrit trigger

Change the trigger condition from SCM polling (or another trigger policy) to Gerrit Event

Specify the gerrit conditions via the Advanced button in the gerrit trigger settings


Configure the git plugin (appears once the Hudson Gerrit plugin is installed)

Add the following after gerrit's ref-spec in the git plugin

Click the Advanced button and change the git repository settings

1. Set $GERRIT_REFSPEC as the git refspec to clone

2. Set $GERRIT_PATCHSET_REVISION as the git branch to build

3. Set the trigger method to Gerrit trigger


Enable these two options

1. Wipe out workspace

2. Use shallow clone





[ Installing HP Helion OpenStack Community Edition ]

Compute IP             : 172.23.147.187

Virtual IP (NAT)       : 192.168.75.0

Virtual IP (Host-Only) : 192.168.230.0


1. Download the HP Helion OpenStack Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show

# mkdir -p /root/work

# tar -xzvf HP_Helion_OpenStack_1.1.1.tgz -C /root/work



2. Installation documentation

http://docs.hpcloud.com/helion/community/install-virtual/


3. Configure sudo

$ sudo visudo

stack   ALL=(ALL:ALL) NOPASSWD: ALL


4. Connect as root and generate an rsa key

$ sudo su -

# ssh-keygen -t rsa


# Install the required software

# apt-get update

# apt-get dist-upgrade

# sudo su -l -c "apt-get install -y qemu-kvm libvirt-bin openvswitch-switch openvswitch-common python-libvirt qemu-system-x86 ntpdate ntp openssh-server"


5. Configure the ntp server

# ntpdate -u time.bora.net

# vi /etc/ntp.conf

...

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server time.bora.net

...

restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap



# Use Ubuntu's ntp server as a fallback.

#server ntp.ubuntu.com

server 127.127.1.0

...


# /etc/init.d/ntp restart

# ntpq -p                             # check ntp status

# dpkg-reconfigure ntp         # if ntp reports errors




6. Unpacking

# mkdir work

# cd work

# tar zxvf /{full path to downloaded file from step 2}/Helion_Openstack_Community_V1.4.tar.gz



7. Adjust the VM specs

# vi /root/vm_plan.csv

,,,,2,4096,512,Undercloud

,,,,2,24576,512,OvercloudControl

,,,,2,8192,512,OvercloudSwiftStorage

,,,,4,16384,512,OvercloudCompute



8. Start the seed vm

# export SEED_NTP_SERVER=192.168.122.1

# export NODE_MEM=4096

# HP_VM_MODE=y bash -x /root/work/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed --vm-plan /root/vm_plan.csv 2>&1|tee seedvminstall.log



9. Create the Under Cloud and Over Cloud

# Connect to the seed vm

# ssh 192.0.2.1


# Set the variables

# export OVERCLOUD_CONTROLSCALE=1

# export OVERCLOUD_SWIFTSTORAGESCALE=1

# export OVERCLOUD_SWIFT_REPLICA_COUNT=1

# export ENABLE_CENTRALIZED_LOGGING=0

# export USE_TRICKLE=0

# export OVERCLOUD_STACK_TIMEOUT=240

# export UNDERCLOUD_STACK_TIMEOUT=240

# export OVERCLOUD_NTP_SERVER=192.168.122.1

# export UNDERCLOUD_NTP_SERVER=192.168.122.1

# export FLOATING_START=192.0.8.140

# export FLOATING_END=192.0.8.240

# export FLOATING_CIDR=192.0.8.0/21

# export OVERCLOUD_NEUTRON_DVR=False



# Change the locale

export LANGUAGE=en_US.UTF-8

export LANG=en_US.UTF-8

export LC_ALL=en_US.UTF-8



# start Under Cloud

bash -x tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1|tee stackinstall.log



10. Check the following IPs

OVERCLOUD_IP_ADDRESS  : 192.0.2.23

UNDERCLOUD_IP_ADDRESS  : 192.0.2.2



11. Verify the installation

# Check the passwords of the demo and admin users

# cat /root/tripleo/tripleo-undercloud-passwords

# cat /root/tripleo/tripleo-overcloud-passwords


12. Connect to the seed VM and look up the undercloud IP

# . /root/stackrc

# UNDERCLOUD_IP=$(nova list | grep "undercloud" | awk ' { print $12 } ' | sed s/ctlplane=// )

# echo $UNDERCLOUD_IP


13. Look up the overcloud IP from the seed VM

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json

# . /root/tripleo/tripleo-incubator/undercloudrc

# OVERCLOUD_IP=$(heat output-show overcloud KeystoneURL | cut -d: -f2 | sed s,/,,g )

# echo $OVERCLOUD_IP



[ Changes so that VMs inside the overcloud can reach the Internet ]

0. DNS change (overcloud)

Edit /etc/resolv.conf


1. security rule check (overcloud)



2. ip forward (host, seed, undercloud, overcloud)
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.ip_forward = 1
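
These are sysctl settings and do not survive a reboot on their own; a minimal way to persist and apply them on each host (standard Ubuntu layout):

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.ip_forward = 1
EOF
sysctl -p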


3. br-tun, br-int, br-ex up (host, seed, overcloud, compute)
ip link set br-tun up
ip link set br-ex up
ip link set br-int up


4. Host iptables NAT add
iptables -t nat -A POSTROUTING -s 192.0.8.0/21 ! -d 192.0.2.0/24 -j SNAT --to-source 172.23.147.187


5. Host iptables filter delete
iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
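
To verify that the NAT rule is in place and the REJECT rules are gone (standard iptables listings):

iptables -t nat -L POSTROUTING -n -v
iptables -L FORWARD -n -v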



6. Host iptables NAT DNAT port change

# overcloud Horizon port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.0.2.21


# ALS port forwarding

iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.0.8.143




14. Open console access from the host

# ssh 192.0.2.1 -R 443:<overcloud IP>:443 -L <laptop IP>:443:127.0.0.1:443

# ssh 192.0.2.1 -R 443:192.0.2.24:443 -L 172.23.147.187:443:127.0.0.1:443



15. Connect to the demo VM

# ssh debian@192.0.8.141



16. Change the overcloud scheduler memory ratio

# ssh heat-admin@192.0.2.23                  # overcloud-controllerMgmt

$ sudo su -

# vi /etc/nova/nova.conf

...

ram_allocation_ratio=100

...

# restart nova-scheduler
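
(The exact restart command depends on the image's init scripts; on this Ubuntu-based controller it is presumably:)

# service nova-scheduler restart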

# exit


# Modify the other overcloud controllers as well

# ssh heat-admin@192.0.2.27             # overcloud-controller0

# ssh heat-admin@192.0.2.28             # overcloud-controller1





17. Access monitoring

http://<under cloud ip>/icinga           # icingaadmin / icingaadmin



18. Find the Kibana password needed to access undercloud logging

ssh heat-admin@<undercloud IP>

cat  /opt/kibana/htpasswd.cfg

http://<under cloud ip>:81                   # kibana / ?????




# Back up the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --save-vms


# Restore the VMs

# tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --resume-vms





[ HDP install ]


1. Download the HP Helion Development Platform Community version

https://helion.hpwsportal.com/catalog.html#/Home/Show


2. HDP installation documentation

https://docs.hpcloud.com/helion/devplatform/install/community


* Installation can be done either from the host (base) or from the seed


3. Install required software on the seed

# pip install cffi enum34 pyasn1 virtualenv

# scp -o StrictHostKeyChecking=no 192.0.2.21:/usr/local/share/ca-certificates/ephemeralca-cacert.crt /root/ephemeralca-cacert.crt


# tar -zxvf hp_helion_devplatform_community.tar.gz

# cd dev-platform-installer

# ./DevelopmentPlatform_Enable.sh \
    -u admin \
    -p bd9352ceed184839e2231d2a13062d461928b857 \
    -a 192.0.2.21 \
    -i c1821d8687f14fd4b74c11892f5d7af0 \
    -e /root/ephemeralca-cacert.crt

# (-p: admin password, -a: overcloud IP, -i: tenant ID, -e: CA cert copied from the overcloud)



4. Install required software on the host (base)

# sudo apt-get install -y python-dev libffi-dev libssl-dev python-virtualenv python-pip

# mkdir -p hdp_work

# cd hdp_work

# tar -zxvf /home/stack/Downloads/HDP/hp_helion_devplatform_community.tar.gz

# cd dev-platform-installer

# ./DevelopmentPlatform_Setup.sh -p {admin_user_password} -a {auth_keystone_ip_address}

# ./DevelopmentPlatform_Setup.sh -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -a 192.0.2.21


Alternatively, specify -n (username) and -t (tenant name) as follows:

# ./DevelopmentPlatform_Setup.sh -r regionOne -n admin -p 2c0ee7b859261caf96a3069c60f516de1e3682c9 -t admin -a '192.0.2.21'

# If you do not know the admin password, run:

# cat /root/tripleo/tripleo-overcloud-passwords


# If you do not know the Keystone IP, run:

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json . /root/tripleo/tripleo-incubator/undercloudrc

# heat output-show overcloud KeystoneURL




5. Download the client tools for cluster configuration

http://docs.hpcloud.com/helion/devplatform/1.2/ALS-developer-trial-quick-start/2

Download cf-mgmt and the ALS client.

# Copy the files from the host to the seed

$ unzip *.zip

$ scp helion-1.2.0.1-linux-glibc2.3-x86_64/helion root@192.0.2.1:client

$ scp linux-amd64/cf-mgmt root@192.0.2.1:client


# Run on the seed

6. Create the cluster

$ vi ~/.profile

export PATH=$PATH:/root/client/cf-mgmt:/root/client/helion:.


$ cf-mgmt update









===========================   Reference   ======================



1. DNS settings for the VMs

vi tripleo/hp_passthrough/overcloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}


vi tripleo/hp_passthrough/undercloud_neutron_dhcp_agent.json

{"option":"dhcp_delete_namespaces","value":"True"},

{"option":"dnsmasq_dns_servers","value":"203.236.1.12,203.236.20.11"}



2. Change the VM root disk location

# mkdir -p /data/libvirt/images           # create the directory for the VM qcow2 images in advance

# vi /root/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh

...

IMAGES_DIR=${IMAGES_DIR:-"/data/libvirt/images"}    # change the directory on line 127

...


# virsh pool-dumpxml default > pool.xml


# vi pool.xml

<pool type='dir'>

  <name>default</name>

  <uuid>9690731d-e0d1-49d1-88a4-b25bccc78418</uuid>

  <capacity unit='bytes'>436400848896</capacity>

  <allocation unit='bytes'>2789785694208</allocation>

  <available unit='bytes'>18446741720324706304</available>

  <source>

  </source>

  <target>

    <path>/data/libvirt/images</path>

    <permissions>

      <mode>0711</mode>

      <owner>-1</owner>

      <group>-1</group>

    </permissions>

  </target>

</pool>


# virsh pool-destroy default

# virsh pool-create pool.xml
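
Note that virsh pool-create makes a transient pool that disappears when libvirtd restarts. To make the relocated pool persistent instead (an alternative, not from the original notes):

# virsh pool-define pool.xml
# virsh pool-start default
# virsh pool-autostart default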



3. Change the IPs on the matching lines of the files below: 192.0.8.0 -> 192.10.8.0,      192.0.15.0 -> 192.10.15.0   (see the sed sketch after the list)

./tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh:800

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:70

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:71

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:72

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:181

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:182

./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh:183
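
A hedged one-liner for the same substitutions; note that it rewrites every occurrence in the two scripts, not only the lines listed above, so review the .bak diff afterwards:

# sed -i.bak 's/192\.0\.8\./192.10.8./g; s/192\.0\.15\./192.10.15./g' \
      ./tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh \
      ./tripleo/tripleo-incubator/scripts/hp_ced_setup_net.sh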



# Variables to set when installing the undercloud and overcloud

# export OVERCLOUD_NEUTRON_DVR=False

# export OVERCLOUD_CINDER_LVMLOOPDEVSIZE=500000      # size as needed


# Change the seed locale

# locale-gen en_US.UTF-8

# sudo dpkg-reconfigure locales          # if needed



# Variable setting (for when the Community version errors out)

# vi ./tripleo/tripleo-incubator/scripts/hp_ced_setup_cloud_env.sh

...

export OVERCLOUD_CONTROLSCALE=${OVERCLOUD_CONTROLSCALE:-2}    # change line 40

...


4. Changing the VM DNS when it was not set at the start

# . /root/tripleo/tripleo-overcloud-passwords

# TE_DATAFILE=/root/tripleo/ce_env.json

# . /root/tripleo/tripleo-incubator/undercloudrc

# neutron subnet-list

# neutron subnet-update --dns-nameserver 203.236.1.12 --dns-nameserver 203.236.20.11 c4316d44-e2ae-43fb-b462-40fa767bd9fb















Posted by seungkyua@gmail.com

1. Download Tomcat 8.0

http://tomcat.apache.org/download-80.cgi


2. Install

$ sudo mkdir -p /usr/local

$ sudo mv ~/Downloads/apache-tomcat-8.0.23 /usr/local


3. Delete and recreate the symbolic link

$ sudo rm -f /Library/Tomcat

$ sudo ln -s /usr/local/apache-tomcat-8.0.23 /Library/Tomcat


4. Set up for execution

$ sudo chown -R stephen /Library/Tomcat          # replace stephen with your macOS user

$ sudo chmod +x /Library/Tomcat/bin/*.sh


5. start / stop

$ /Library/Tomcat/bin/startup.sh

$ /Library/Tomcat/bin/shutdown.sh
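
To confirm Tomcat came up (default HTTP connector on port 8080):

$ curl -I http://localhost:8080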


6. Download a Tomcat controller

http://www.activata.co.uk/downloads/
















Posted by seungkyua@gmail.com


[ devstack prerequisites ]
Installing devstack requires sufficient CPU, memory, and disk.


[ localrc ]

VOLUME_BACKING_FILE_SIZE=70000M


[ Fixing the RabbitMQ memory leak ]
$ sudo vi /etc/rabbitmq/rabbitmq-env.conf
#celery_ignore_result = true

$ sudo service rabbitmq-server restart


[ CPU, memory, and disk overcommit ]
$ vi /etc/nova/nova.conf

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 100.0
disk_allocation_ratio = 100.0
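
nova-scheduler must be restarted for the new filters and ratios to take effect. On devstack that means restarting the scheduler in its screen session; on a package-based install it would be something like:

$ sudo service nova-scheduler restart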



[ Change the DNS assigned to VMs ]
$ neutron subnet-list
$ neutron subnet-update <subnet> --dns_nameservers list=true 8.8.8.8 8.8.4.4



[ Upload the Ubuntu Server 14.04 image ]
Name: Ubuntu Server 14.04 64-bit
Location: http://uec-images.ubuntu.com/releases/14.04.2/14.04.2/ubuntu-14.04-server-cloudimg-amd64-disk1.img
Format: QCOW2 - QEMU Emulator
Minimum disk: 5 GB
Minimum RAM: 1024 MB

See the following sites for images:
https://help.ubuntu.com/community/UEC/Images
http://uec-images.ubuntu.com/releases/



[ Test whether the OpenStack API server is reachable ]

[ Install Ruby ]
$ sudo apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ sudo apt-get install liblzma-dev zlib1g-dev
$ ruby -v
$ nokogiri --version

$ sudo gem install fog
$ vi .fog

:openstack:
  :openstack_auth_url:  http://192.168.230.141:5000/v2.0/tokens
  :openstack_api_key:   your-password
  :openstack_username:  admin
  :openstack_tenant:    demo
  :openstack_region:    RegionOne # Optional

$ fog openstack
>>Compute[:openstack].servers



[ Check access to the OpenStack metadata server ]
$ curl http://169.254.169.254

[ Passing user_data with fog ]
$ fog openstack
>> s = Compute[:openstack].servers.create(name: 'test', flavor_ref: , image_ref: , personality: [{'path' => 'user_data.json', 'contents' => 'test' }])



[ Check whether OpenStack API calls are rate-limited ]
$ fog openstack
>> 100.times { p Compute[:openstack].servers }


[ Large volume creation test ]
1. Create a 30 GB volume
2. Attach the volume to an instance

[ If the volume will not attach, check whether tgtd is running ]
$ sudo netstat -tulpn | grep 3260
$ sudo service tgt start

3. Format the additional volume
$ sudo fdisk -l
$ sudo fdisk /dev/vdb

Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048): ENTER
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559): ENTER
Command (m for help): t
Partition number (1-4, default 1): 1
Hex code (type L to list codes): 83
Command (m for help): w

$ sudo mkfs.ext3 /dev/vdb1
$ sudo mkdir /disk
$ sudo mount -t ext3 /dev/vdb1 /disk
$ cd /disk
$ sudo touch pla
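
To keep the mount across reboots, an fstab entry can be added (a sketch; /dev/vdb1 can change names after reattach, so a UUID from blkid is safer):

$ echo '/dev/vdb1 /disk ext3 defaults 0 2' | sudo tee -a /etc/fstab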





[ OpenStack setup for installing MicroBOSH ]
$ mkdir ~/my-micro-deployment
$ cd my-micro-deployment

[ Prepare the Nova client ]
$ sudo apt-get install python-novaclient
$ unset OS_SERVICE_TOKEN
$ unset OS_SERVICE_ENDPOINT
$ vi adminrc
export OS_USERNAME=admin
export OS_PASSWORD=imsi00
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://192.168.230.141:35357/v2.0

1. Create a keypair: microbosh
$ nova keypair-add microbosh > microbosh.pem
$ chmod 600 microbosh.pem

2. Create a security group: bosh
name: bosh
description: BOSH Security Group

3. Enter the security rules

Direction  Ether Type  IP Protocol  Port Range  Remote
Ingress    IPv4        TCP          1-65535     bosh
Ingress    IPv4        TCP          25777       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25555       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          25250       0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          6868        0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          4222        0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          68          0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          53          0.0.0.0/0 (CIDR)
Ingress    IPv4        UDP          53          0.0.0.0/0 (CIDR)
Egress     IPv4        Any          -           0.0.0.0/0 (CIDR)
Egress     IPv6        Any          -           ::/0 (CIDR)
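
The same rules can also be created with the novaclient CLI of this era (a sketch matching the table above; the group name and description come from step 2):

$ nova secgroup-create bosh "BOSH Security Group"
$ nova secgroup-add-group-rule bosh bosh tcp 1 65535
$ nova secgroup-add-rule bosh tcp 25777 25777 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 25555 25555 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 25250 25250 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 6868 6868 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 4222 4222 0.0.0.0/0
$ nova secgroup-add-rule bosh udp 68 68 0.0.0.0/0
$ nova secgroup-add-rule bosh tcp 53 53 0.0.0.0/0
$ nova secgroup-add-rule bosh udp 53 53 0.0.0.0/0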


4. Allocate Floating IP



[ Install MicroBOSH ]
1. Write the manifest YAML

$ vi manifest.yml

name: microbosh

network:
  type: manual
  vip: 192.168.75.206       # Replace with a floating IP address
  ip: 10.0.0.15    # subnet IP address allocation pool of OpenStack internal network
  cloud_properties:
    net_id: a34928c6-9715-4a91-911e-a6822afd600b # internal network UUID

resources:
  persistent_disk: 20000
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://192.168.230.141:35357/v2.0   # Identity API endpoint
      tenant: demo          # Replace with OpenStack tenant name
      username: admin    # Replace with OpenStack username
      api_key: your-password      # Replace with your OpenStack password
      default_key_name: microbosh   # OpenStack Keypair name
      private_key: microbosh.pem     # Path to OpenStack Keypair private key
      default_security_groups: [bosh]

apply_spec:
  properties:
    director: {max_threads: 3}
    hm: {resurrector_enabled: true}
    ntp: [time.bora.net, 0.north-america.pool.ntp.org, 1.north-america.pool.ntp.org]


2. Install the BOSH CLI
$ sudo gem install bosh_cli --no-ri --no-rdoc
$ sudo gem install bosh_cli_plugin_micro --no-ri --no-rdoc


3. Download a stemcell
https://bosh.io/stemcells

[ Download the Ubuntu Server 14.04 stemcell ]
https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

$ curl -k -L -J -O https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

or:

$ wget --no-check-certificate --content-disposition https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=2986

4. Deploy MicroBOSH
$ bosh micro deployment manifest.yml
$ bosh micro deploy bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz


5. Undeploy MicroBOSH
$ bosh micro delete


6. Redeploy MicroBOSH
$ bosh micro deploy --update bosh-stemcell-2986-openstack-kvm-ubuntu-trusty-go_agent.tgz





[ Install Cloud Foundry on the MicroBOSH VM (Warden-based) ]
1. Connect to the VM
    - ID: vcap / c1oudc0w
    - sudo su -


2. Install Ruby & the BOSH CLI
$ apt-get update
$ apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev
$ apt-get install liblzma-dev zlib1g-dev

$ gem install bosh_cli --no-ri --no-rdoc -r
$ gem install bosh_cli_plugin_micro --no-ri --no-rdoc -r


* How to upgrade RubyGems
$ wget http://production.cf.rubygems.org/rubygems/rubygems-2.4.8.tgz
$ tar xvfz rubygems-2.4.8.tgz
$ cd rubygems-2.4.8
$ ruby setup.rb

* Add a remote gem source
$ gem sources --add http://rubygems.org/


3. Install Go
$ mkdir -p CloudFoundry
$ cd CloudFoundry
$ wget --no-check-certificate https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz
$ tar -C /usr/local -xzf go1.4.2.linux-amd64.tar.gz
$ mkdir -p /usr/local/gopath

$ vi ~/.profile
export GOPATH=/usr/local/gopath
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

$ . ~/.profile
$ apt-get install git
$ go get github.com/cloudfoundry-incubator/spiff


4. Download the Cloud Foundry source
$ git clone https://github.com/cloudfoundry/cf-release.git
$ cd cf-release
$ ./update


5. Manual Cloud Foundry installation
$ bosh target 192.168.75.206
   (login: admin / admin)


* Expanding the /tmp directory
Attach an additional disk, then:
$ mkfs.ext3 /dev/vdc
$ mkdir -p /tmp2
$ mount -t ext3 /dev/vdc /tmp2
$ mount --bind /tmp2 /tmp
$ chown root.root /tmp
$ chmod 1777 /tmp

* Undo the mount
$ umount /tmp


$ bosh upload release releases/cf-212.yml


$ cp spec/fixtures/openstack/cf-stub.yml .





[ Install BOSH Lite (Warden-based) on Mac ]
1. install vagrant
http://www.vagrantup.com/downloads.html


2. Download bosh-lite
$ git clone https://github.com/cloudfoundry/bosh-lite
$ cd bosh-lite


3. install VirtualBox
https://www.virtualbox.org/wiki/Downloads


4. Start vagrant
$ vagrant up --provider=virtualbox


5. Target the Bosh Director and login admin/admin
$ bosh target 192.168.50.4 lite
$ bosh login


6. Install Homebrew and Spiff
$ xcode-select --install
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew tap xoebus/homebrew-cloudfoundry
$ brew install spiff


7. Install Cloud Foundry
$ git clone https://github.com/cloudfoundry/cf-release
$ cd cf-release
$ ./update


8. Single Command Deploy
$ cd ~/CloudFoundry/bosh-lite
$ ./bin/provision_cf


9. Add a route
$ ./bin/add-route


10. Restart the containers after a VM restart
$ bosh cck
Choose "2" to create the missing VM, then answer "yes".




[ Deploy a simple Go web app ]

$ ssh vcap@192.168.50.4     # password : c1oudc0w

$ bosh vms

$ cf api --skip-ssl-validation https://api.10.244.0.34.xip.io           # ha_proxy IP (matches the login URL below)


$ cf login

Email> admin

Password> admin


$ cf create-org test-org

$ cf target -o test-org

$ cf create-space development

$ cf target -o test-org -s development


$ sudo apt-get update

$ sudo apt-get install git

$ sudo apt-get install golang

$ cf update-buildpack go_buildpack

$ git clone https://github.com/cloudfoundry-community/simple-go-web-app.git

$ cd simple-go-web-app

### alternative buildpack: https://github.com/cloudfoundry/go-buildpack.git

$ cf push simple-go-web -b https://github.com/kr/heroku-buildpack-go.git

$ cf apps             # app list

$ cf logs simple-go-web --recent     # app deploy log


# cf login

$ cf login -a http://api.10.244.0.34.xip.io -u admin -o test-org -s development















Posted by seungkyua@gmail.com