Installing haproxy

Linux/Ubuntu 2016.01.09 15:15

1. Install haproxy

$ sudo apt-get install haproxy


$ sudo vi /etc/haproxy/haproxy.cfg

...

defaults

log        global

mode    http

retries   3                  # added

option   httplog

option   dontlognull

option   redispatch      # added: if a server goes down, redispatch its requests to another server

...

...

listen serv 0.0.0.0:80        # added: 'serv' can be any name

mode http

option http-server-close

timeout http-keep-alive 3000             # added: keep assets such as images on a single connection (value in ms)

server serv 127.0.0.1:9000 check       # name backend servers server1, server2, and so on
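The single server line above registers only one backend. A load-balancing listen block usually names several; a sketch (the second server and its port are illustrative, not from the original setup):

```
listen serv 0.0.0.0:80
    mode http
    option http-server-close
    timeout http-keep-alive 3000
    server server1 127.0.0.1:9000 check
    server server2 127.0.0.1:9001 check
```

With `check`, haproxy health-checks each backend, and together with `option redispatch` a request bound for a dead server is resent to a live one.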


$ sudo service haproxy reload


Posted by Kubernetes Korea co-leader seungkyua@gmail.com

1. Download

https://golang.org/doc/install?download=go1.5.2.darwin-amd64.tar.gz     # Mac

https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz           # Linux


$ sudo tar -C /usr/local -xzf go1.5.2.darwin-amd64.tar.gz

$ cd /usr/local

$ sudo chown -R root go


2. Environment variables

$ sudo vi /etc/profile

export GOROOT=/usr/local/go                                 # where go is installed

export PATH=$PATH:/usr/local/go/bin                      # go binaries


$ cd Documents

$ mkdir -p go_workspace{,/bin,/pkg,/src}
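The brace expansion above can be exercised in a scratch directory: the empty first alternative creates go_workspace itself, then bin, pkg, and src under it (bash brace expansion; plain sh would not expand the braces).

```shell
# Run in a throwaway directory so nothing in $HOME is touched.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p go_workspace{,/bin,/pkg,/src}
ls go_workspace
```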


$ vi .bash_profile

export GOPATH=$HOME/Documents/go_workspace                     # go workspace location

export PATH=$HOME/Documents/go_workspace/bin:$PATH         # go workspace binaries



## Download the go tools

$ go get golang.org/x/tools/cmd/...



3. Download a Go sample

$ go get github.com/GoesToEleven/GolangTraining


# Download the kubernetes source

$ go get k8s.io/kubernetes       # equivalent to git clone https://github.com/kubernetes/kubernetes


4. Go workspace directory layout

- bin

- pkg

- src - github.com - GoesToEleven - GolangTraining



5. Download and set up the WebStorm editor

https://www.jetbrains.com/webstorm/download/

Version : WebStorm-11.0.3-custom-jdk-bundled.dmg



6. Install the golang plugin

https://plugins.jetbrains.com/plugin/5047?pr=idea

Version : Go-0.10.749.zip


# Project Open

/Users/ahnsk/Documents/go_workspace/src/github.com/GoesToEleven/GolangTraining


# Preferences settings

Go SDK : /usr/local/go

Go Libraries : go_workspace/src



7. Download and apply a theme

http://color-themes.com/?view=index

Download Sublime Text 2.jar


File >> Import Settings, then select Sublime Text 2.jar


# Preferences settings

Editor -> Colors & Fonts : set Scheme to Sublime Text2



8. Install the Live Edit plugin for JavaScript debugging

https://plugins.jetbrains.com/plugin/7007?pr=pycharm

Download LiveEdit.jar


# Preferences settings

Build, Execution, Deployment -> Debugger -> Live Edit

Check: Highlight current....

Update Auto in (ms):   16 


# Click the magnifier at the top right and open Edit Configuration

In the dialog, click + at the top left and add a JavaScript Debug configuration


# Install the extension from the Chrome Web Store

JetBrains IDE Support



# WebStorm shortcuts

Find file      : Command + Shift + O

Find in files  : Command + Shift + F

Run            : Ctrl + Alt + R

Debug          : Ctrl + Alt + D

Delete line    : Command + Backspace

Duplicate line : Command + D

Reformat code  : Command + Alt + L



# Apply Go formatting rules to a file

$ gofmt -s -w file.go


$ git rebase -i    # or git push -f : squash work into logical units of commits



# DCO (Developer Certificate of Origin) setup for Docker contributions

# sign off every commit

Docker-DCO-1.1-Signed-off-by: Seungkyu Ahn <seungkyua@gmail.com> (github: seungkyua)



# or install a hook

$ cd docker

$ curl -o .git/hooks/prepare-commit-msg \

https://raw.githubusercontent.com/dotcloud/docker/master/contrib/prepare-commit-msg.hook

$ chmod +x .git/hooks/prepare-commit-msg



# Set the github user

$ git config --global github.user seungkyua



# Channel

# To avoid deadlock, the side that sends values into a channel must close it.

# The receiving side calls defer sync.WaitGroup.Done().

# Alternatively, start another goroutine that waits with sync.WaitGroup.Wait() and closes the channel once all work is done.
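A rough shell analogy of the note above (not Go): a reader blocked on a FIFO only finishes when the writing side closes its end, just as a goroutine ranging over a channel only terminates after the sender calls close().

```shell
dir=$(mktemp -d)
mkfifo "$dir/ch"

( echo one; echo two ) > "$dir/ch" &   # "sender": writes two values, exits, closing its end

count=0
while read -r v; do                    # "receiver": the loop ends only because the writer closed
  count=$((count + 1))
done < "$dir/ch"
echo "received $count values"
rm -rf "$dir"
```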




# Viewing documentation

## Check the code for suspicious constructs

$ go vet wordcount.go


## View usage of the tar package

$ go doc tar


## Run a local documentation server

$ godoc -http=:6060



# Install godep

$ go get github.com/tools/godep

$ cd ~/Documents/go_workspace/src/github.com/tools/godep

$ go install


## Move into a project that uses godep

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/


## godep get downloads packages into Godeps/_workspace

## _workspace is slated to be deprecated

$ godep get <package>


Useful sites

Programming 2015.12.22 13:12

1. Kubernetes

    Google paper : https://research.google.com/pubs/pub43438.html

    Google talk : https://speakerdeck.com/jbeda/containers-at-scale



Slide authoring

http://prezi.com




0. Server layout

Master   : 192.168.75.211  (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)

Node01  : 192.168.75.212  (kube-proxy, kubelet)

Node02  : 192.168.75.213  (kube-proxy, kubelet)


etcd-2.2.1, flannel-0.5.5, k8s-1.1.2



[ Install on both the Master and the Nodes ]

1. Install required software with apt-get

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

#deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

#deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

#deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

#deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

#deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

#deb https://apt.dockerproject.org/repo ubuntu-wily main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo service docker restart



2. sudo settings

# gpasswd -a stack sudo   (this doesn't seem to work??)

# add to /etc/sudoers instead:

stack   ALL=(ALL:ALL) NOPASSWD: ALL



3. Install ntp & set up ssh keys

# ssh between the master and each node must work directly under the stack account

# on the master and each node, ssh from the stack account to root on the same machine must also work
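A minimal sketch of that key setup, assuming the stack account and the host names (master, node01, node02) from the hosts section; the distribution commands are interactive, so they are shown as comments.

```shell
keydir=$(mktemp -d)                                # scratch dir for the demo
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"     # passwordless keypair

# Then copy the public key to every peer and to root on the same machine:
# for host in master node01 node02; do ssh-copy-id -i "$keydir/id_rsa.pub" stack@$host; done
# ssh-copy-id -i "$keydir/id_rsa.pub" root@localhost
ls "$keydir"
```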



4. Host settings (/etc/hosts)

192.168.75.211    master

192.168.75.212    node01

192.168.75.213    node02



5. Install Go

1. Download

$ cd /home/stack/downloads

$ wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz

$ sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz


2. Environment variables

$ sudo vi /etc/profile

export GOROOT=/usr/local/go

export PATH=$PATH:/usr/local/go/bin


$ sudo visudo             # needed so the go path also applies under sudo

Defaults    env_reset

Defaults    env_keep += "GOPATH"

Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"


$ cd

$ vi .bash_profile

export GOPATH=$HOME/Documents/go_workspace:$HOME/Documents/go_workspace/src/k8s.io/kubernetes/Godeps/_workspace

export PATH=$HOME/Documents/go_workspace/bin:$PATH



6. Install kubernetes

# Download with go

$ go get k8s.io/kubernetes   # equivalent to git clone https://github.com/kubernetes/kubernetes.git


$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes

$ git checkout -b v1.1.2 tags/v1.1.2

$ make all                                      # build results land in the _output directory


# Rebuild after changing the source (for reference); results land in the _output directory

$ make all WHAT=plugin/cmd/kube-scheduler GOFLAGS=-v      # scheduler

$ make all WHAT=cmd/kubelet GOFLAGS=-v                           # kubelet

$ make all WHAT=cmd/kube-apiserver GOFLAGS=-v                # apiserver


# Rebuild after changing the source (for reference)

$ hack/build-go.sh                  # running make invokes build-go.sh
$ hack/local-up-cluster.sh        # brings up a local cluster


$ sudo su -

# cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu


# export KUBE_VERSION=1.1.2

# export FLANNEL_VERSION=0.5.5

# export ETCD_VERSION=2.2.1


# ./build.sh                 # downloads into the binaries directory

# exit



$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ vi config-default.sh


export nodes="stack@192.168.75.211 stack@192.168.75.212"

export role="a i"

export NUM_MINIONS=${NUM_MINIONS:-1}

export SERVICE_CLUSTER_IP_RANGE=192.168.230.0/24

export FLANNEL_NET=172.16.0.0/16



ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"

DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.230.10"}

DNS_DOMAIN="cluster.local"

DNS_REPLICAS=1


ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"


$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh


# Copied files

make-ca-cert.sh    

reconfDocker.sh    

config-default.sh    

util.sh    

kube-scheduler.conf    

kube-apiserver.conf    

etcd.conf    

kube-controller-manager.conf    

flanneld.conf    

kube-controller-manager    

kube-scheduler    

etcd    

kube-apiserver    

flanneld    

kube-controller-manager    

etcdctl    

kube-scheduler    

etcd    

kube-apiserver    

flanneld



# Copy kubectl

$ sudo cp ubuntu/binaries/kubectl /opt/bin/.


# Add the path

$ vi ~/.bash_profile

export PATH=/opt/bin:$PATH

export KUBECTL_PATH=/opt/bin/kubectl



# Install the add-ons

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh


# If it fails, run the following (downloads the Docker build image)

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes

$ ./build/run.sh hack/build-cross.sh


# Install the add-ons again

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cluster/ubuntu

$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh



[ Tearing down the Kubernetes installation ]

$ cd ..

$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh


# Remove the docker containers left on node01

$ docker ps -a | awk '{print $1}' | xargs docker stop

$ docker ps -a | awk '{print $1}' | xargs docker rm

$ sudo cp ubuntu/binaries/kubectl /opt/bin/.                # kubectl must be copied to /opt/bin


$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh



[ Attach the Master's Docker to flannel ]

$ sudo service docker stop

$ sudo ip link set dev docker0 down

$ sudo brctl delbr docker0

$ cat /run/flannel/subnet.env      # check flannel's subnet and mtu values

$ sudo vi /etc/default/docker

DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.25.1/24 --mtu=1472"


$ sudo service docker start

$ sudo ip link set dev docker0 up



# Garbage containers pile up on node01 (check with docker ps -a); just remove them

# ssh into node01 and list the garbage

$ docker ps -a | grep Exited | awk '{print $1}'

$ docker ps -a | grep Exited | awk '{print $1}' | xargs docker rm
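The Exited-container pipeline above, run against canned `docker ps -a` output (the IDs and images are made up) so the filtering is visible without a docker daemon:

```shell
sample='CONTAINER ID   IMAGE     COMMAND    STATUS
1a2b3c4d5e6f   busybox   "sleep"    Exited (0) 2 hours ago
9f8e7d6c5b4a   nginx     "nginx"    Up 3 hours'
# grep keeps only Exited rows; awk takes the first column (the container ID)
ids=$(printf '%s\n' "$sample" | grep Exited | awk '{print $1}')
echo "$ids"    # -> 1a2b3c4d5e6f
```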


# kubernetes volumes are created under /var/lib/kubelet/pods

# kubernetes garbage collection: https://github.com/kubernetes/kubernetes/blob/master/docs/admin/garbage-collection.md


$ kubectl get nodes

$ kubectl get pods --namespace=kube-system         # check the add-on pods

$ kubectl cluster-info


# Inspect the Skydns pod

$ kubectl describe pod kube-dns-v9-549av --namespace=kube-system


# Verify DNS

$ kubectl create -f busybox.yaml


$ vi busybox.yaml

apiVersion: v1

kind: Pod

metadata:

  name: busybox

  namespace: default

spec:

  containers:

  - image: busybox

    command:

      - sleep

      - "3600"

    imagePullPolicy: IfNotPresent

    name: busybox

  restartPolicy: Always


$ kubectl get pods busybox


kubectl exec POD [-c CONTAINER] -i -t -- COMMAND [args..] [flags]

$ kubectl exec busybox -- nslookup kubernetes.default


# Delete busybox

$ kubectl delete -f busybox.yaml



# Check the web endpoint

http://192.168.75.211:8080/


# Check the UI

http://192.168.75.211:8080/ui    >> redirects to the page below

http://192.168.75.211:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui



# Developing on the Mac and pushing to the Master (for reference)

# if the remote has both a tag v1.1.2 and a branch v1.1.2, name the remote branch explicitly

# git push [repo] (local-branch:)remote-branch

$ git push origin refs/heads/v1.1.2


$ git config --global user.name "Seungkyu Ahn"

$ git config --global user.email "seungkyua@gmail.com"


# Copy modified local files to the Master server

$ vi ~/bin/cmaster.sh

#!/bin/bash


function change_directory {

  cd /Users/ahnsk/Documents/go_workspace/src/k8s.io/kubernetes

}


change_directory

files=$(git status | grep -E 'modified|new file' | awk -F':' '{print$2}')


for file in $files; do

    scp $file stack@192.168.230.211:/home/stack/Documents/go_workspace/src/k8s.io/kubernetes/$file

done
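The extraction in cmaster.sh can be checked against canned `git status` output (the paths are made up) to show which files the grep/awk pair picks up:

```shell
status='    modified:   pkg/api/types.go
    new file:   cmd/foo/main.go
    deleted:    old.go'
# keep only modified/new-file rows, then take everything after the colon
files=$(printf '%s\n' "$status" | grep -E 'modified|new file' | awk -F':' '{print $2}')
echo $files    # unquoted on purpose, so the padding collapses
```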



# Run kube-apiserver from source

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cmd/kube-apiserver


$ sudo -E go run apiserver.go --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --service-cluster-ip-range=192.168.230.0/24 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DenyEscalatingExec,SecurityContextDeny --service-node-port-range=30000-32767 --client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key



# Generate documentation

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/cmd/genkubedocs

$ mkdir -p temp

$ go run gen_kube_docs.go temp kube-apiserver



7. Deploy a sample app

https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook


# run from the directory where kubernetes is installed

$ sudo kubectl create -f examples/guestbook/redis-master-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods

$ sudo kubectl describe pods/redis-master-xssrd

$ sudo kubectl logs <pod_name>          # view the container log


$ sudo kubectl create -f examples/guestbook/redis-master-service.yaml

$ sudo kubectl get services


$ sudo kubectl create -f examples/guestbook/redis-slave-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods


$ sudo kubectl create -f examples/guestbook/redis-slave-service.yaml

$ sudo kubectl get services



$ sudo kubectl create -f examples/guestbook/frontend-controller.yaml

$ sudo kubectl get rc

$ sudo kubectl get pods



$ sudo kubectl create -f examples/guestbook/frontend-service.yaml

$ sudo kubectl get services





$ sudo kubectl describe services frontend

$ sudo kubectl get ep


# View dns

$ sudo kubectl get services kube-dns --namespace=kube-system


# View environment variables

$ sudo kubectl get pods -o json

$ sudo kubectl get pods -o wide

$ sudo kubectl exec frontend-cyite -- printenv | grep SERVICE


8. Delete the sample app

$ sudo kubectl stop rc -l "name in (redis-master, redis-slave, frontend)"

$ sudo kubectl delete service -l "name in (redis-master, redis-slave, frontend)"



# Network

TAP : used to connect a vm to eth0 (a physical port); the path is tap <-> bridge <-> eth0

VETH : used to connect docker <-> bridge, docker <-> OVS, and bridge <-> OVS


# interconnecting namespaces

http://www.opencloudblog.com/?p=66



# Finding which veth pairs with which Docker container

$ vi veth.sh


#!/bin/bash


set -o errexit

set -o nounset

#set -o pipefail


VETHS=`ifconfig -a | grep "Link encap" | sed 's/ .*//g' | grep veth`

DOCKERS=$(docker ps -a | grep Up | awk '{print $1}')


for VETH in $VETHS

do

  PEER_IFINDEX=`ethtool -S $VETH 2>/dev/null | grep peer_ifindex | sed 's/ *peer_ifindex: *//g'`

  for DOCKER in $DOCKERS

  do

    PEER_IF=`docker exec $DOCKER ip link list 2>/dev/null | grep "^$PEER_IFINDEX:" | awk '{print $2}' | sed 's/:.*//g'`

    if [ -z "$PEER_IF" ]; then

      continue

    else

      printf "%-10s is paired with %-10s on %-20s\n" $VETH $PEER_IF $DOCKER

      break

    fi

  done

done
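The interface-name extraction at the top of veth.sh can be checked against canned `ifconfig -a` output (the interface names are made up):

```shell
ifout='veth1a2b   Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
docker0    Link encap:Ethernet  HWaddr 02:42:d4:8a:f2:01
eth0       Link encap:Ethernet  HWaddr 08:00:27:4c:11:22'
# keep interface header lines, strip everything after the name, keep veths
veths=$(printf '%s\n' "$ifout" | grep "Link encap" | sed 's/ .*//g' | grep veth)
echo "$veths"    # -> veth1a2b
```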



$ cd /opt

$ sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u20-b26/jdk-8u20-linux-x64.tar.gz"


$ sudo tar -zxvf jdk-8u20-linux-x64.tar.gz


$ sudo update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_20/bin/java 2


$ sudo update-alternatives --config java


There are 2 choices for the alternative java (providing /usr/bin/java).


  Selection    Path                                            Priority   Status

------------------------------------------------------------

* 0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      auto mode

  1            /opt/jdk1.8.0_20/bin/java                                 2         manual mode

  2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode


Press enter to keep the current choice[*], or type selection number: 1



$ sudo update-alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_20/bin/javac 2

$ sudo update-alternatives --config javac



$ sudo update-alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_20/bin/jar 2

$ sudo update-alternatives --config jar


$ sudo vi .bashrc


export JAVA_HOME=/opt/jdk1.8.0_20

export JRE_HOME=/opt/jdk1.8.0_20/jre

export PATH=$PATH:/opt/jdk1.8.0_20/bin:/opt/jdk1.8.0_20/jre/bin


$ echo $JAVA_HOME

$ echo $JRE_HOME



Episode 1: Sand Hill Shuffle

Episode 2: Runaway Devaluation

Episode 3: Bad Money

Episode 4: The Lady

Episode 5: Server Space

Episode 6: Homicide

Episode 7: Adult Content

Episode 8: White Hat / Black Hat

Episode 9: Binding Arbitration

Episode 10: Two Days of the Condor




docker ssh + git

Container 2015.08.13 15:20

1. Install docker

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

#deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

#deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

#deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

#deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

#deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

#deb https://apt.dockerproject.org/repo ubuntu-wily main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo service docker restart


# Installing Docker on a Mac

$ ruby -e \

"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"


$ brew update

$ brew install caskroom/cask/brew-cask


$ brew cask install virtualbox

$ brew install docker

$ brew install boot2docker


$ boot2docker init

$ boot2docker up


To connect the Docker client to the Docker daemon, please set:

    export DOCKER_HOST=tcp://192.168.59.103:2376

    export DOCKER_CERT_PATH=/Users/ahnsk/.boot2docker/certs/boot2docker-vm

    export DOCKER_TLS_VERIFY=1


$ $(boot2docker shellinit)       # set environment variables


$ docker info

$ boot2docker ssh                 # ssh into the vm

$ boot2docker ip                   # print the vm ip


$ docker run --rm -ti ubuntu:latest /bin/bash        # test the ubuntu image

$ docker run --rm -ti fedora:latest /bin/bash         # test the fedora image

$ docker run --rm -ti centos:latest /bin/bash         # test the centos image


# Upgrade the Boot2docker VM image

$ boot2docker stop

$ boot2docker download

$ boot2docker up


$ boot2docker delete


# Log in to Docker Hub

$ docker login


Username: seungkyua

Password: 

Email: seungkyua@gmail.com


$  cat ~/.docker/config.json


$ docker logout


# Switch the Docker Registry to insecure


# boot2docker

$ sudo touch /var/lib/boot2docker/profile

$ sudo vi /var/lib/boot2docker/profile

EXTRA_ARGS="--insecure-registry 192.168.59.103:5000"

$ sudo /etc/init.d/docker restart


# Ubuntu

$ sudo vi /etc/default/docker

DOCKER_OPTS="--insecure-registry 192.168.59.103:5000"

$ sudo service docker restart


# Fedora

$ sudo vi /etc/sysconfig/docker

OPTIONS="--insecure-registry 192.168.59.103:5000"

$ sudo systemctl daemon-reload

$ sudo systemctl restart docker


# CoreOS

$ sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

$ sudo vi  /etc/systemd/system/docker.service

ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// \

$DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ \

--insecure-registry 192.168.59.103:5000

$ sudo systemctl daemon-reload

$ sudo systemctl restart docker


# Run a local registry

$ sudo mkdir -p /var/lib/registry

$ docker run -d -p 5000:5000 \

-v /var/lib/registry:/var/lib/registry \

--restart=always --name registry registry:2



# Test it

$ docker pull ubuntu

$ docker tag ubuntu 192.168.59.103:5000/ubuntu


$ docker push 192.168.59.103:5000/ubuntu

$ docker pull 192.168.59.103:5000/ubuntu


$ docker stop registry

$ docker rm -v registry




2. Write a Dockerfile

# mkdir docker

# cd docker

# mkdir git-ssh

# cd git-ssh

# vi Dockerfile

FROM ubuntu:14.04


RUN apt-get -y update

RUN apt-get -y install openssh-server

RUN apt-get -y install git


# Setting openssh

RUN mkdir /var/run/sshd

RUN sed -i "s/#PasswordAuthentication yes/PasswordAuthentication no/" /etc/ssh/sshd_config


# Adding git user

RUN adduser --system git

RUN mkdir -p /home/git/.ssh


# Clearing and setting authorized ssh keys

RUN echo '' > /home/git/.ssh/authorized_keys

RUN echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTFEBrNfpSIvgz7mZ+I96/UqKFCxcouoiDDS9/XPNB1Tn7LykgvHHaR5mrPOQIJ/xTFhSVWpwsmEvTLdv3QJYLB5P+UfrjY5fUmiYgGpKKr5ym2Yua2wykHgQYdT4+lLhyq3BKbnG9vgc/FQlaCWntLckJfAYnHIGYWl1yooMAOka0/pOeJ+hPF0TxLQtrjoVJWiaHLVnB8qgPiCgvSyKROvW6cs1AhY9abasUWrQ5eNsLLMY1rDWccantMjVlcUdDZuPzI4g+/MtfE3IAs7JxtmwMvCMFRMuzWTtZkZSVyqpEGDeLnPGgMNTYUwaxQhlJLtcYnNTqdyZr8ZCcz3zP stephen@Stephenui-MacBook-Pro.local' >> /home/git/.ssh/authorized_keys


# Updating shell to bash

RUN sed -i s#/home/git:/bin/false#/home/git:/bin/bash# /etc/passwd


EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]

# docker build -t git-ssh-img .

# docker run --name git-ssh -d -p 1234:22 git-ssh-img
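The shell-swapping sed from the Dockerfile above, applied to one canned /etc/passwd entry (the uid is made up) so the substitution is visible:

```shell
entry='git:x:1001:65534::/home/git:/bin/false'
# '#' as the sed delimiter avoids escaping the slashes in the paths
newentry=$(printf '%s\n' "$entry" | sed 's#/home/git:/bin/false#/home/git:/bin/bash#')
echo "$newentry"    # -> git:x:1001:65534::/home/git:/bin/bash
```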


3. Run a throwaway container with a bash shell

$ docker run -i -t --rm --net='host' ubuntu:14.04 bash


3. Attach to a running container

$ docker exec -it <containerIdOrName> bash


4. List all containers

# docker ps -a


5. Delete all containers

$ docker ps -a | awk '{print $1}' | grep -v CONTAINER | xargs sudo docker rm


6. Delete all <none> images

$ docker images | grep "<none>" | awk '{print $3}' | xargs sudo docker rmi


7. Search for an image and run it

$ sudo docker search ubuntu

$ sudo docker run --name myssh -d -p 4444:22 rastasheep/ubuntu-sshd


8. Add the stack user to the docker group

$ sudo usermod -aG docker stack

$ sudo service docker restart

# log back in for the group change to take effect


9. Pull a docker image

$ docker pull ubuntu:latest


10. Run with a bash shell, then detach

$ docker run -i -t --name hello ubuntu /bin/bash

root@bb97e5f57596:/#


Ctrl + p, Ctrl + q        => detach without stopping the container


$ docker attach hello            => reattach (press Enter once)


11. Install nginx

# mkdir data


# vi Dockerfile

FROM ubuntu:14.04.3


RUN apt-get update

RUN apt-get install -y nginx

RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf

RUN chown -R www-data:www-data /var/lib/nginx


VOLUME ["/data", "/etc/nginx/site-enabled", "/var/log/nginx"]


WORKDIR /etc/nginx


CMD ["nginx"]


EXPOSE 80

EXPOSE 443


# docker build -t nginx:0.1 .

# docker run --name hello-nginx -d -p 2080:80 -v /root/data:/data nginx:0.1



11. Copy a file out and inspect it

# docker cp hello-nginx:/etc/nginx/nginx.conf ./


12. Commit a container to an image

# docker commit -a "aaa <aaa@aaa.com>" -m "Initial commit" hello-nginx nginx:0.2


13. View image and container changes

# docker diff CONTAINER_ID

# docker history IMAGE_ID


14. Inspect container internals

# docker inspect hello-nginx


15. Find a docker container's pid

$ docker inspect -f '{{.State.Pid}}' containerID


16. Point-to-point networking between Docker containers (create a namespace per container and connect them with a VETH pair)

https://docs.docker.com/v1.7/articles/networking/#building-your-own-bridge


$ docker run -i -t --rm --net=none base /bin/bash

root@1f1f4c1f931a:/#


$ docker run -i -t --rm --net=none base /bin/bash

root@12e343489d2f:/#


# Learn the container process IDs

# and create their namespace entries


$ docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a

2989

$ docker inspect -f '{{.State.Pid}}' 12e343489d2f

3004

$ sudo mkdir -p /var/run/netns

$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989

$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004


# Create the "peer" interfaces and hand them out


$ sudo ip link add A type veth peer name B


$ sudo ip link set A netns 2989

$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A

$ sudo ip netns exec 2989 ip link set A up

$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A


$ sudo ip link set B netns 3004

$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B

$ sudo ip netns exec 3004 ip link set B up

$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B



# Another ssh sample

FROM ubuntu:14.04

RUN echo "deb http://archive.ubuntu.com/ubuntu/ trusty main universe" > /etc/apt/sources.list

RUN apt-get update


RUN apt-get install -y openssh-server

RUN mkdir /var/run/sshd

RUN echo 'root:screencast' | chpasswd


EXPOSE 22

CMD /usr/sbin/sshd -D



# NodeJS sample

$ git clone https://github.com/spkane/docker-node-hello.git

$ cd docker-node-hello


$ brew install tree

$ tree -a -I .git             # view the directory tree


$ docker build --no-cache -t example/docker-node-hello:latest .

$ docker run -d -p 8081:8080 example/docker-node-hello:latest    # host 8081, docker 8080


$ echo $DOCKER_HOST


$ docker stop DOCKER_ID


# Pass env vars with the -e option

$ docker run -d -p 8081:8080 -e WHO="Seungkyu Ahn" example/docker-node-hello:latest


$ docker inspect DOCKER_ID



1. Install nodejs and npm on the Mac via brew

http://brew.sh

$ brew install npm                   # installing npm pulls in nodejs as a dependency


2. Install nodejs on Ubuntu

http://nodejs.org

$ sudo apt-get install g++


$ ./configure

$ make

$ sudo make install


3. Upgrade npm

$ sudo npm install -g npm           # /usr/local/lib/node_modules/npm


4. Install and use bower (a package manager for the web)

http://bower.io

$ sudo npm install -g bower


$ bower install jquery                                          # registered package

$ bower install desandro/masonry                         # GitHub shorthand
$ bower install git://github.com/user/package.git   # Git endpoint

$ bower install http://example.com/script.js           # URL


$ bower install angular         # creates the bower_components/angular subdirectory and downloads into it


$ vi .bowerrc

{

  "directory": "WebContent/bower"

}

$ bower install angular         # creates the WebContent/bower/angular subdirectory and downloads into it


5. Create a project in eclipse (Project Type: Dynamic Web Project)



0. Server layout

Master : 192.168.75.129  (etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler)

Node   : 192.168.75.130  (flannel, kube-proxy, kubelet)


# gpasswd -a stack sudo   (this doesn't seem to work??)


0. Download the Kubernetes source and point WebStorm at it

# Download the source

Install Go and set the PATH (http://ahnseungkyu.com/204)

$ cd ~/Documents/go_workspace/src

$ go get k8s.io/kubernetes


$ cd k8s.io/kubernetes

$ git checkout -b v1.1.2 tags/v1.1.2


# Create a Go project in WebStorm via New Project

Path : ~/Documents/go_workspace/src/k8s.io/kubernetes


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> Go SDK : add the path below

Path : /usr/local/go


# WebStorm >> Preferences >> Languages & Frameworks >> Go >> Go Libraries >> Project libraries : add the path below

Path : Documents/go_workspace/src/k8s.io/kubernetes/Godeps/_workspace



[ Install on both the Master and the Minion ]

1. Install required software with apt-get

# Install docker

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ sudo vi /etc/apt/sources.list.d/docker.list


# Debian Jessie

deb https://apt.dockerproject.org/repo debian-jessie main


# Debian Stretch/Sid

deb https://apt.dockerproject.org/repo debian-stretch main


# Ubuntu Precise

deb https://apt.dockerproject.org/repo ubuntu-precise main


# Ubuntu Trusty (14.04 LTS)

deb https://apt.dockerproject.org/repo ubuntu-trusty main


# Ubuntu Utopic (14.10)

deb https://apt.dockerproject.org/repo ubuntu-utopic main


# Ubuntu Vivid (15.04)

deb https://apt.dockerproject.org/repo ubuntu-vivid main


# Ubuntu Wily (15.10)

deb https://apt.dockerproject.org/repo ubuntu-wily main


# Ubuntu Xenial (16.04)

deb https://apt.dockerproject.org/repo ubuntu-xenial main


$ sudo apt-get update

$ sudo apt-get purge lxc-docker*

$ sudo apt-get purge docker.io

$ sudo apt-get autoremove

$ sudo apt-get install docker-engine


$ sudo apt-get install bridge-utils

$ sudo apt-get install curl

$ sudo usermod -a -G docker stack      # add the stack user to the docker group

$ sudo systemctl start docker.service



2. Install go with apt-get

$ sudo apt-get install linux-libc-dev golang gcc

$ sudo apt-get install ansible



3. Register hosts (on every server, as root)

# echo "192.168.75.129 kube-master

192.168.75.130 kube-node01" >> /etc/hosts



[ Install the Kubernetes Master ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-master

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.129:2380 \

--listen-peer-urls http://192.168.75.129:2380 \

--listen-client-urls http://192.168.75.129:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.129:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the file

$ sudo systemctl start etcd.service



$ etcdctl set /coreos.com/network/config "{\"Network\":\"172.16.0.0/16\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.10.0-24 "{\"PublicIP\":\"192.168.75.129\"}"

$ etcdctl set /coreos.com/network/subnets/172.16.93.0-24 "{\"PublicIP\":\"192.168.75.130\"}"


$ etcdctl ls /                          # etcdctl ls --recursive shows the whole tree

/coreos.com/network/config

/coreos.com/network/subnets/172.16.10.0-24

/coreos.com/network/subnets/172.16.93.0-24

/registry


$ etcdctl get /coreos.com/network/config

{"Network":"172.16.0.0/16"}


$ etcdctl get /coreos.com/network/subnets/172.16.10.0-24     # the Master's flannel0 bridge ip

{"PublicIP":"192.168.75.129"}


$ etcdctl get /coreos.com/network/subnets/172.16.93.0-24     # Node01's flannel0 bridge ip

{"PublicIP":"192.168.75.130"}



5. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.4 tags/v0.5.4     # or: git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which port etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://localhost:4001,http://localhost:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service



6. Install the Kubernetes API Server

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.1 origin/release-1.1

$ sudo make release


$ cd _output/release-tars

$ sudo tar zxvf kubernetes-server-linux-amd64.tar.gz


$ cd ~

$ git clone https://github.com/kubernetes/contrib.git

$ sudo cp -R ~/downloads/kubernetes/_output/* ~/downloads/contrib/ansible/roles/

$ cd ~/downloads/contrib/ansible/roles

$ sudo chown stack.stack -R *

$ vi  ~/downloads/contrib/ansible/inventory

[masters]

kube-master


[etcd]

kube-master


[nodes]

kube-node01



$ sudo su -

# ssh-keygen

# for node in kube-master kube-node01; do

ssh-copy-id ${node}

done

# exit


$ vi ~/downloads/contrib/ansible/group_vars/all.yml

source_type: localBuild

cluster_name: cluster.local

ansible_ssh_user: root

kube_service_addresses: 10.254.0.0/16

networking: flannel

flannel_subnet: 172.16.0.0

flannel_prefix: 12

flannel_host_prefix: 24

cluster_logging: true

cluster_monitoring: true

kube-ui: true

dns_setup: true

dns_replicas: 1


$ cd ~/downloads/contrib/ansible

$ ./setup.sh


$ sudo cp kubernetes/server/bin/kube-apiserver /usr/bin

$ sudo cp kubernetes/server/bin/kube-controller-manager /usr/bin

$ sudo cp kubernetes/server/bin/kube-scheduler /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-apiserver.service


[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

ExecStart=/usr/bin/kube-apiserver \

--api-rate=10 \

--bind-address=0.0.0.0 \

--etcd_servers=http://127.0.0.1:4001 \

--portal_net=10.254.0.0/16 \                              # where is this actually used?

--insecure-bind-address=0.0.0.0 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--kubelet_port=10250 \

--service_account_key_file=/tmp/kube-serviceaccount.key \

--service_account_lookup=false \

--service-cluster-ip-range=172.16.0.0/16            # should this match the flannel network?

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-apiserver.service


$ sudo systemctl enable kube-apiserver.service

$ sudo systemctl start kube-apiserver.service


$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl restart kube-apiserver


6. Install the Kubernetes Controller Manager

$ cd /lib/systemd/system

$ sudo vi kube-controller-manager.service


[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-controller-manager \

--address=0.0.0.0 \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true 

#--service_account_private_key_file=/tmp/kube-serviceaccount.key

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-controller-manager.service


$ sudo systemctl enable kube-controller-manager.service

$ sudo systemctl start kube-controller-manager.service


$ sudo systemctl daemon-reload

$ sudo systemctl restart kube-controller-manager


7. Install the Kubernetes Scheduler

$ cd /lib/systemd/system

$ sudo vi kube-scheduler.service


[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

Requires=etcd.service

After=etcd.service


[Service]

ExecStart=/usr/bin/kube-scheduler \

--master=127.0.0.1:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-scheduler.service


$ sudo systemctl enable kube-scheduler.service

$ sudo systemctl start kube-scheduler.service


8. Register the IP range flannel will use in etcd (only needed when the nodes use flannel)

$ sudo etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'



[ Service Cluster IP Range ]

10.0.0.0 - 10.255.255.255 (10/8 prefix)

172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
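When picking a service cluster IP range, it helps to confirm addresses stay inside one of the private blocks listed above so they do not collide with routable node IPs. A rough check in POSIX shell; `is_private` is our own helper:

```shell
#!/bin/sh
# Return success if an IPv4 address falls inside one of the RFC 1918
# private blocks listed above (10/8, 172.16/12, 192.168/16).
is_private() {
  IFS=. read -r a b _ _ <<EOF
$1
EOF
  [ "$a" -eq 10 ] && return 0
  [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ] && return 0
  [ "$a" -eq 192 ] && [ "$b" -eq 168 ] && return 0
  return 1
}

is_private 10.254.0.1 && echo "10.254.0.1 is private"
```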




[ Installing the Kubernetes Minion ]


4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.2.2/etcd-v2.2.2-linux-amd64.tar.gz -o etcd-v2.2.2-linux-amd64.tar.gz

$ tar xzvf etcd-v2.2.2-linux-amd64.tar.gz

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcd /usr/bin

$ sudo cp -f etcd-v2.2.2-linux-amd64/etcdctl /usr/bin


$ sudo mkdir -p /var/lib/etcd/member

$ sudo chmod -R 777 /var/lib/etcd


$ sudo vi /etc/network-environment

# The master's IPv4 address - reachable by the kubernetes nodes.

NODE_NAME=kube-node01

MASTER_NAME=kube-master

NODE_NAME_01=kube-node01


$ sudo vi /lib/systemd/system/etcd.service

[Unit]

Description=etcd

After=network-online.service


[Service]

EnvironmentFile=/etc/network-environment          # or /etc/default/etcd.conf

PermissionsStartOnly=true

ExecStart=/usr/bin/etcd \

--name ${NODE_NAME} \

--data-dir /var/lib/etcd \

--initial-advertise-peer-urls http://192.168.75.130:2380 \

--listen-peer-urls http://192.168.75.130:2380 \

--listen-client-urls http://192.168.75.130:2379,http://127.0.0.1:2379 \

--advertise-client-urls http://192.168.75.130:2379 \

--initial-cluster-token etcd-cluster-1 \

--initial-cluster ${MASTER_NAME}=http://kube-master:2380,${NODE_NAME_01}=http://kube-node01:2380 \

--initial-cluster-state new

Restart=always

RestartSec=10s


[Install]

WantedBy=multi-user.target

Alias=etcd.service


$ cd /lib/systemd/system

$ sudo chmod 775 etcd.service


$ sudo systemctl enable etcd.service

$ sudo systemctl daemon-reload                        # reload is required after editing the unit file

$ sudo systemctl start etcd.service


$ etcdctl member list
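`etcdctl member list` prints one line per member with a `name=` field; a sketch that pulls the names out so you can confirm both hosts joined. It reads the command's output on stdin, and the helper name is ours:

```shell
#!/bin/sh
# Extract the name= field from each line of `etcdctl member list`
# output, e.g. to verify kube-master and kube-node01 both joined.
member_names() {
  tr ' ' '\n' | sed -n 's/^name=//p'
}
```

Usage: `etcdctl member list | member_names`.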


5. Install flannel
$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.5 tags/v0.5.5     # git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo netstat -tulpn | grep etcd          # check which port etcd is listening on

$ sudo flanneld -etcd-endpoints=http://kube-node01:4001,http://kube-node01:2379 -v=0


$ cd /lib/systemd/system

$ sudo vi flanneld.service


[Unit]

Description=flanneld Service

After=etcd.service

Requires=etcd.service


[Service]

EnvironmentFile=/etc/network-environment

PermissionsStartOnly=true

User=root

ExecStart=/usr/bin/flanneld \

-etcd-endpoints http://kube-node01:4001,http://kube-node01:2379 \

-v=0

Restart=always

RestartSec=10s

RemainAfterExit=yes


[Install]

WantedBy=multi-user.target

Alias=flanneld.service



$ sudo systemctl enable flanneld.service

$ sudo systemctl start flanneld.service




8. Install the Kubernetes Proxy

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo cp kubernetes/server/bin/kube-proxy /usr/bin

$ sudo cp kubernetes/server/bin/kubelet /usr/bin

$ sudo cp kubernetes/server/bin/kubectl /usr/bin

$ sudo cp kubernetes/server/bin/kubernetes /usr/bin


$ sudo mkdir -p /var/log/kubernetes

$ sudo chown -R stack.docker /var/log/kubernetes/


$ cd /lib/systemd/system

$ sudo vi kube-proxy.service


[Unit]

Description=Kubernetes Proxy

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kube-proxy \

--master=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--v=0                                                     # debug mode

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kube-proxy.service


$ sudo systemctl enable kube-proxy.service

$ sudo systemctl start kube-proxy.service



9. Install the Kubernetes Kubelet

$ cd /lib/systemd/system

$ sudo vi kubelet.service


[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

ExecStart=/usr/bin/kubelet \

--address=0.0.0.0 \

--port=10250 \

--hostname_override=kube-minion \

--api_servers=http://kube-master:8080 \

--log-dir=/var/log/kubernetes \

--logtostderr=true \

--cluster_domain=cluster.local \

--v=0                                                      # debug mode

Restart=always

RestartSec=10


[Install]

WantedBy=multi-user.target

Alias=kubelet.service


$ sudo systemctl enable kubelet.service

$ sudo systemctl start kubelet.service


# restart the docker service

$ sudo service docker restart

10. Install flannel (pulls the Network and other settings from etcd) - needs verification
$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1     # git checkout -b release-0.5.4 origin/release-0.5.4

$ ./build                   # builds the flanneld binary into a new bin directory

$ sudo cp -f bin/flanneld /usr/bin/.


$ sudo flanneld -etcd-endpoints=http://kube-master:4001 -v=0



10. Check the installed nodes

$ sudo kubectl get nodes


NAME                 LABELS                                                    STATUS

192.168.75.202   kubernetes.io/hostname=192.168.75.202    NotReady

kube-minion        kubernetes.io/hostname=kube-minion         Ready
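A one-liner to count Ready nodes from `kubectl get nodes` output can drive a wait loop during bring-up. A sketch reading the output on stdin; `ready_count` is our own name:

```shell
#!/bin/sh
# Count nodes whose STATUS column (last field) is exactly "Ready"
# in `kubectl get nodes` output; the header line is skipped.
ready_count() {
  awk 'NR > 1 && $NF == "Ready" { n++ } END { print n + 0 }'
}
```

Usage: `sudo kubectl get nodes | ready_count`.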


11. Bring up the services

# Master server

$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service


# Minion server

$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kubelet.service



12. Bring up a mysql service

$ mkdir pods

$ cd pods

$ vi mysql.yaml

apiVersion: v1

kind: Pod

metadata:

  name: mysql

  labels:

    name: mysql

spec:

  containers:

    - resources:

        limits:

          cpu: 1

      image: mysql

      name: mysql

      env:

        - name: MYSQL_ROOT_PASSWORD

          # change this

          value: root

      ports:

        - containerPort: 3306

          name: mysql


$ sudo kubectl create -f mysql.yaml

$ sudo kubectl get pods


$ vi mysql-service.yaml

apiVersion: v1

kind: Service

metadata:

  labels:

    name: mysql

  name: mysql

spec:

  publicIPs:

    - 192.168.75.202

  ports:

    # the port that this service should serve on

    - port: 3306

  # label keys and values that must match in order to receive traffic for this service

  selector:

    name: mysql


$ sudo kubectl create -f mysql-service.yaml

$ sudo kubectl get services
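The Service only routes traffic if its selector matches the Pod's label. A rough stdin-based check of the two manifests above; `yaml_name_after` is a hypothetical helper, not kubectl functionality:

```shell
#!/bin/sh
# Print the value of the first "name:" entry that appears after a
# given key (e.g. "labels:" or "selector:") in YAML read from stdin.
# Good enough for the small manifests above; not a real YAML parser.
yaml_name_after() {
  awk -v key="$1" '$1 == key { grab = 1; next }
                   grab && $1 == "name:" { print $2; exit }'
}
```

Usage: compare `yaml_name_after "labels:" < mysql.yaml` with `yaml_name_after "selector:" < mysql-service.yaml`.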


**************************************************

*****  Install with juju (failed)                      ***********

**************************************************

1. Install juju

$ sudo add-apt-repository ppa:juju/stable

$ sudo apt-get update

$ sudo apt-get install juju-core juju-quickstart

$ juju quickstart u/kubernetes/kubernetes-cluster


**************************************************

*****  For reference                                    ***********

**************************************************


3. Install flannel

$ git clone https://github.com/coreos/flannel.git

$ cd flannel

$ git checkout -b v0.5.1 tags/v0.5.1

$ ./build                   # builds the flanneld binary into a new bin directory

$ cp bin/flanneld /opt/bin




4. Install etcd

https://github.com/coreos/etcd/releases

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.1.1/etcd-v2.1.1-linux-amd64.tar.gz -o etcd-v2.1.1-linux-amd64.tar.gz

$ tar xzvf etcd-v2.1.1-linux-amd64.tar.gz

$ sudo cp  etcd-v2.1.1-linux-amd64/etcd* /opt/bin

$ cd /var/lib

$ sudo mkdir etcd

$ sudo chown stack.docker etcd

$ sudo mkdir /var/run/kubernetes

$ sudo chown stack.docker /var/run/kubernetes

$ sudo vi /etc/default/etcd

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"



3. Install the Kubernetes Master

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/master/* /usr/bin

$ sudo cp -f binaries/kubectl /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl enable /etc/systemd/etcd.service

$ sudo systemctl enable /etc/systemd/kube-apiserver.service

$ sudo systemctl enable /etc/systemd/kube-controller-manager.service

$ sudo systemctl enable /etc/systemd/kube-scheduler.service


$ sudo systemctl start etcd.service

$ sudo systemctl start kube-apiserver.service

$ sudo systemctl start kube-controller-manager.service

$ sudo systemctl start kube-scheduler.service






4. Install the Kubernetes Minion

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ cd cluster/ubuntu/

$ ./build.sh            # downloads into the binaries directory


# Add binaries to /usr/bin

$ sudo cp -f binaries/minion/* /usr/bin


$ wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz

$ tar -xvf master.tar.gz

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd

$ sudo cp -f calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd


$ sudo systemctl enable /etc/systemd/kube-proxy.service

$ sudo systemctl enable /etc/systemd/kube-kubelet.service


$ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment

$ vi network-environment

#! /usr/bin/bash

# This node's IPv4 address

DEFAULT_IPV4=192.168.75.201


# The kubernetes master IP

KUBERNETES_MASTER=192.168.75.201


# Location of etcd cluster used by Calico.  By default, this uses the etcd

# instance running on the Kubernetes Master

ETCD_AUTHORITY=192.168.75.201:4001


# The kubernetes-apiserver location - used by the calico plugin

KUBE_API_ROOT=https://192.168.75.201:443/api/v1/


$ sudo mv -f network-environment /etc



$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kube-kubelet.service


4. Install kubernetes

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git

$ cd kubernetes

$ git checkout -b release-1.0 origin/release-1.0

$ sudo make release


$ cd _output/release-tars

$ sudo chown -R stack.docker *

$ tar xvf kubernetes-server-linux-amd64.tar.gz


$ sudo su -

# echo "192.168.75.201 kube-master

192.168.75.202 kube-minion" >> /etc/hosts

# exit





5. Install the Kubernetes Master


# services that run on kube-master

etcd

flanneld

kube-apiserver

kube-controller-manager

kube-scheduler


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ cp server/bin/kube-apiserver /opt/bin/

$ cp server/bin/kube-controller-manager /opt/bin/

$ cp server/bin/kube-scheduler /opt/bin/

$ cp server/bin/kubectl /opt/bin/

$ cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/etcd.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-apiserver.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-controller-manager.conf /etc/init/

$ sudo cp kubernetes/cluster/ubuntu/master/init_conf/kube-scheduler.conf /etc/init/


$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/etcd /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-apiserver /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-controller-manager /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/master/init_scripts/kube-scheduler /etc/init.d/


$ sudo vi /etc/default/kube-apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet_port=10250"

KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

KUBE_API_ARGS=""



$ sudo vi /etc/default/kube-controller-manager

KUBELET_ADDRESSES="--machines=192.168.75.202"






6. Install the Minion


# services that run on kube-minion

flanneld

kubelet

kube-proxy


$ cd ~/kubernetes/_output/release-tars/kubernetes

$ sudo cp server/bin/kubelet /opt/bin/

$ sudo cp server/bin/kube-proxy /opt/bin/

$ sudo cp server/bin/kubectl /opt/bin/

$ sudo cp server/bin/kubernetes /opt/bin/


$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kubelet.conf /etc/init

$ sudo cp kubernetes/cluster/ubuntu/minion/init_conf/kube-proxy.conf /etc/init


$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kubelet /etc/init.d/

$ sudo cp kubernetes/cluster/ubuntu/minion/init_scripts/kube-proxy /etc/init.d/


$ cd ~/kubernetes

$ vi cluster/ubuntu/config-default.sh

export nodes=${nodes:-"stack@192.168.75.201 stack@192.168.75.202"}

roles=${roles:-"ai i"}

export NUM_MINIONS=${NUM_MINIONS:-2}

export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}

export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}


$ cd cluster

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh


3. Install Go

https://golang.org/dl/

$ curl -L https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz -o go1.4.2.linux-amd64.tar.gz

$ tar xvf go1.4.2.linux-amd64.tar.gz


Posted by Kubernetes Korea co-leader seungkyua@gmail.com

1. Install git

# apt-get install git-core git-review

# adduser gerrit

# mkdir -p /git_repo

# chown -R gerrit.gerrit /git_repo

# sudo mkdir -p /git_review

# chown -R gerrit.gerrit /git_review

# git init --bare /git_repo/paas.git


2. Download gerrit

https://gerrit-releases.storage.googleapis.com/index.html


3. Set up mysql

# mysql -uroot -p

mysql> CREATE USER 'gerrit'@'localhost' IDENTIFIED BY 'secret';

mysql> CREATE DATABASE reviewdb;

mysql> ALTER DATABASE reviewdb charset=utf8;

mysql> GRANT ALL ON reviewdb.* TO 'gerrit'@'localhost';

mysql> FLUSH PRIVILEGES;



4. Install apache2

$ sudo apt-get install apache2 apache2-utils libapache2-mod-proxy-html libxml2-dev

$ sudo a2enmod proxy_http

$ sudo a2enmod proxy

$ sudo service apache2 restart


# sudo vi /etc/apache2/sites-available/gerrit.conf

<VirtualHost *:8080>

  ServerName localhost

  ProxyRequests Off

  ProxyVia Off

  ProxyPreserveHost On


  <Proxy *>

    Order deny,allow

    Allow from all

  </Proxy>


  <Location /login/>

    AuthType Basic

    AuthName "Gerrit Code Review"

    Require valid-user

    AuthUserFile /git_review/etc/passwords

  </Location>


  AllowEncodedSlashes On

  ProxyPass / http://127.0.0.1:8081/

  ProxyPassReverse / http://127.0.0.1:8081/

  # HTTP auth based on an external SSO check (uncomment to enable):
#  RequestHeader set REMOTE-USER %{REMOTE_USER}

</VirtualHost>


$ cd /etc/apache2/sites-available

$ sudo a2ensite gerrit.conf

$ sudo vi /etc/apache2/ports.conf

Listen 8080


$ sudo service apache2 restart




5. Install the gerrit site

# apt-get install openjdk-7-jdk


# How to install Oracle Java instead

# add-apt-repository ppa:webupd8team/java

# apt-get update

# apt-get install oracle-java7-installer



# su - gerrit

$ cd /git_review

$ cp /home/stack/Downloads/gerrit-2.11.3.war .

$ java -jar gerrit-2.11.3.war init -d /git_review

 *** Git Repositories

*** 


Location of Git repositories   [git]: /git_repo


*** SQL Database

*** 


Database server type           [h2]: mysql


Gerrit Code Review is not shipped with MySQL Connector/J 5.1.21

**  This library is required for your configuration. **

Download and install it now [Y/n]?

Downloading http://repo2.maven.org/maven2/mysql/mysql-connector-java/5.1.21/mysql-connector-java-5.1.21.jar ... OK

Checksum mysql-connector-java-5.1.21.jar OK

Server hostname                [localhost]: 

Server port                    [(mysql default)]: 

Database name                  [reviewdb]: 

Database username              [gerrit]:

gerrit2's password            : secret


*** Index

*** 


Type                           [LUCENE/?]: 


The index must be rebuilt before starting Gerrit:

  java -jar gerrit.war reindex -d site_path


*** User Authentication

*** 


Authentication method          [OPENID/?]: http

# Get username from custom HTTP header [y/N]? y                    # when using external SSO HTTP auth

# Username HTTP Header [SM_USER]: REMOTE_USER_RETURN    # when using external SSO HTTP auth

SSO logout URL  : http://aa:aa@192.168.75.141:8080/


*** Review Labels

*** 


Install Verified label         [y/N]? 


*** Email Delivery

*** 


SMTP server hostname       [localhost]: smtp.gmail.com

SMTP server port               [(default)]: 465

SMTP encryption                [NONE/?]: SSL

SMTP username                 [gerrit]: skanddh@gmail.com


*** Container Process

*** 


Run as                         [gerrit]: 

Java runtime                   [/usr/local/jdk1.8.0_31/jre]: 

Copy gerrit-2.11.3.war to /git_review/bin/gerrit.war [Y/n]? 

Copying gerrit-2.11.3.war to /git_review/bin/gerrit.war


*** SSH Daemon

*** 


Listen on address              [*]: 

Listen on port                 [29418]: 


Gerrit Code Review is not shipped with Bouncy Castle Crypto SSL v151

  If available, Gerrit can take advantage of features

  in the library, but will also function without it.

Download and install it now [Y/n]? N


*** HTTP Daemon

*** 


Behind reverse proxy           [y/N]? y

Proxy uses SSL (https://)      [y/N]? 

Subdirectory on proxy server   [/]: 

Listen on address              [*]: 127.0.0.1        # because it sits behind the reverse proxy

Listen on port                 [8081]: 

Canonical URL                  [http://127.0.0.1/]:


$ java -jar bin/gerrit.war reindex -d /git_review


# htpasswd -c /git_review/etc/passwords skanddh

# service apache2 restart



6. start/stop Daemon

$ /git_review/bin/gerrit.sh restart

$ /git_review/bin/gerrit.sh start

$ /git_review/bin/gerrit.sh stop


$ sudo ln -snf /git_review/bin/gerrit.sh /etc/init.d/gerrit.sh

$ sudo ln -snf /etc/init.d/gerrit.sh /etc/rc3.d/S90gerrit



[ Enabling HTTPS ]

$ vi gerrit.conf

[httpd]

         listenUrl = proxy-https://127.0.0.1:8081/


$ vi /etc/httpd/conf/httpd.conf

LoadModule ssl_module modules/mod_ssl.so

LoadModule proxy_module modules/mod_proxy.so

<VirtualHost _default_:443>

SSLEngine on

SSLProtocol all -SSLv2

SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

SSLCertificateFile /etc/pki/tls/certs/server.crt

SSLCertificateKeyFile /etc/pki/tls/private/server.key

SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

ProxyPass / http://127.0.0.1:8081/

ProxyPassReverse / http://127.0.0.1:8081/

</VirtualHost>


Generate a certificate

$ sudo mkdir -p /etc/pki/tls/private

$ sudo mkdir -p /etc/pki/tls/certs

$ sudo openssl req -x509 -days 3650 \

-nodes -newkey rsa:2048 \

-keyout /etc/pki/tls/private/server.key -keyform pem \

-out /etc/pki/tls/certs/server.crt -outform pem



-----

Country Name (2 letter code) [AU]:KO

State or Province Name (full name) [Some-State]:Seoul

Locality Name (eg, city) []:Seoul

Organization Name (eg, company) [Internet Widgits Pty Ltd]:MyCompany

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:myhost.mycompany.com

Email Address []:admin@myhost.mycompany.com

$ cd /etc/pki/tls/certs

$ sudo cp server.crt server-chain.crt
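To confirm the key and certificate generated above actually belong together, compare their RSA moduli. A sketch assuming the paths used above; `same_key` is our own helper:

```shell
#!/bin/sh
# A self-signed pair is only usable if the certificate was produced
# from the private key; their RSA moduli must be identical.
same_key() {  # $1 = certificate, $2 = private key
  [ "$(openssl x509 -noout -modulus -in "$1")" = \
    "$(openssl rsa  -noout -modulus -in "$2")" ] \
    && echo "certificate matches key"
}
```

Usage: `same_key /etc/pki/tls/certs/server.crt /etc/pki/tls/private/server.key`.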



Register user.email and user.name

$ git config user.name "Seungkyu Ahn"

$ git config user.email "skanddh@gmail.com"


Save the password

$ git config credential.helper cache                             # cached for 15 minutes by default

$ git config credential.helper 'cache --timeout=3600'      # cached for 1 hour


Install the commit message hook

$ curl -Lo .git/hooks/commit-msg http://localhost:8080/tools/hooks/commit-msg

$ chmod +x .git/hooks/commit-msg
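The hook appends a `Change-Id: I<40 hex>` footer that Gerrit requires on push. A sketch that checks a commit message for one; `has_change_id` is our own name:

```shell
#!/bin/sh
# Gerrit's commit-msg hook adds a footer like
#   Change-Id: I0123456789abcdef0123456789abcdef01234567
# This checks a commit message read on stdin for such a footer.
has_change_id() {
  grep -Eq '^Change-Id: I[0-9a-f]{40}$'
}
```

Usage: `git log -1 --format=%B | has_change_id && echo ok`.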


review (register the gerrit remote url)

$ git remote add gerrit http://localhost:8080/hello-project


# commit this to the server project in advance so it comes down with a clone

$ vi .gitreview


[gerrit]

host=localhost

port=8080

project=hello-project

defaultbranch=master


$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git review

(edit 2)

$ git add

$ git commit --amend

$ git review



review (pushing by hand)

$ git checkout -b bug/1

(edit 1)

$ git add

$ git commit

$ git push origin HEAD:refs/for/master%topic=bug/1
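The `refs/for/<branch>%topic=<name>` target above is what `git review` builds for you. A sketch composing it by hand; `review_refspec` is illustrative:

```shell
#!/bin/sh
# Compose the magic Gerrit push target used above from a branch and
# an optional topic name.
review_refspec() {  # $1 = branch, $2 = topic (optional)
  if [ -n "$2" ]; then
    printf 'HEAD:refs/for/%s%%topic=%s\n' "$1" "$2"
  else
    printf 'HEAD:refs/for/%s\n' "$1"
  fi
}

review_refspec master bug/1    # prints HEAD:refs/for/master%topic=bug/1
```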



[ Installing Jenkins ]

Download jenkins into tomcat's webapps directory

# adduser jenkins

# chown -R jenkins.jenkins apache-tomcat-8.0.26

# su - jenkins


http://jenkins-ci.org/

$ cd /usr/local/apache-tomcat-8.0.26/webapps

$ wget http://updates.jenkins-ci.org/download/war/1.580.1/jenkins.war

$ wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war                       # latest version


Change the tomcat port and URIEncoding

$ vi /usr/local/apache-tomcat-8.0.26/conf/server.xml


<Connector port="7070" protocol="HTTP/1.1"

           connectionTimeout="20000"

           redirectPort="8443"

           URIEncoding="UTF-8" />


$ /usr/local/apache-tomcat-8.0.26/bin/startup.sh


Open jenkins

http://192.168.75.141:7070/jenkins/


Security settings in the web UI

(left menu) Manage Jenkins

Configure Global Security

  - Enable security

  - Security Realm : Jenkins’ own user database

  - Authorization : Matrix-based security

  - User/group to add: admin


After saving, sign up with the admin account



[ Jenkins Integration ]

1. Install the Jenkins plugins

1. Jenkins Git Client plugin

2. Jenkins Git Plugin : integrates Jenkins with git

3. Jenkins Gerrit Trigger plugin : on a gerrit change, fetches the patch set, builds it and leaves a score

4. Hudson Gerrit plugin : enables the git plugin settings


2. The Gerrit Trigger plugin

1. HTTP/S Canonical URL : the URL pointing at gerrit changes and patch sets

2. SSH connection : connects to gerrit and listens for gerrit events


Generate an ssh key as the user running jenkins, and create a batch user in gerrit for jenkins to use

If jenkins runs under the jenkins account, the internal user creation below is unnecessary

If the users differ, run the create-account command as a gerrit administrator to create the internal user

# log in as the skanddh account

$ ssh-keygen -t rsa

$ ssh -p 29418 skanddh@192.168.75.141


# skanddh must be a gerrit administrator account; run this as skanddh

$ sudo cat /home/jenkins/.ssh/id_rsa.pub | \

ssh -p 29418 skanddh@192.168.75.141 gerrit create-account \

--group "'Non-Interactive Users'" --full-name Jenkins \

--email jenkins@localhost.com --ssh-key - jenkins


Check that the Non-Interactive Users group in All-Projects has the permissions below

1. With the Stream Events permission it can detect gerrit changes remotely

2. With Read on refs/* it can read and clone changes from the gerrit repository

3. With Label Code-Review (Verified) -1..+1 on refs/heads/* it can score changes


Configure the Gerrit Trigger plugin

Open the jenkins URL

http://192.168.75.141:7070/jenkins/gerrit-trigger


1. Configure the URL and SSH connection

    Name : Gerrit

    Hostname : 192.168.75.141

    Frontend URL : http://192.168.75.141:8080

    SSH Port : 29418

    Username : jenkins

    E-mail : jenkins@localhost.com

    SSH Keyfile : /home/jenkins/.ssh/id_rsa

    SSH Keyfile Password :


2. Test with the Test Connection button

3. Restart jenkins with the Start/Stop button at the bottom of the settings page


Open the jenkins URL

http://192.168.75.141:7070/jenkins/gerrit_manual_trigger

Enter status:open in the Query field -> click the Search button

Check the changes awaiting review on the http://192.168.75.141:8080/#q/status:open,n,z page


Gerrit trigger settings

Change the trigger condition from SCM polling (or another trigger policy) to Gerrit Event

In the gerrit trigger settings, use the Advanced button to specify the gerrit conditions


Git plugin settings (shown only after installing the Hudson Gerrit plugin)

In the git plugin, append the following after gerrit's ref-spec

Click the Advanced button and change the git repository settings

1. Set $GERRIT_REFSPEC as the git refspec to clone

2. Set $GERRIT_PATCHSET_REVISION as the git branch to build

3. Set the trigger method to Gerrit trigger


Enable these two options

1. Wipe out workspace : clear the workspace before each build

2. Use shallow clone : use a shallow git clone





Posted by Kubernetes Korea co-leader seungkyua@gmail.com