localconf

OpenStack 2016. 5. 27. 10:59

[[local|localrc]]

ADMIN_PASSWORD=secret

DATABASE_PASSWORD=$ADMIN_PASSWORD

RABBIT_PASSWORD=$ADMIN_PASSWORD

SERVICE_PASSWORD=$ADMIN_PASSWORD

HOST_IP=10.40.102.84  # change this to your VM's IP

# Do not use Nova-Network

disable_service n-net

# Enable Neutron

ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3

## Neutron options

Q_USE_SECGROUP=True

FLOATING_RANGE="10.40.102.0/24"

FIXED_RANGE="10.0.0.0/24"

Q_FLOATING_ALLOCATION_POOL=start=10.40.102.250,end=10.40.102.254

PUBLIC_NETWORK_GATEWAY="10.40.102.1"

PUBLIC_INTERFACE=eth0

# Open vSwitch provider networking configuration

Q_USE_PROVIDERNET_FOR_PUBLIC=True

OVS_PHYSICAL_BRIDGE=br-ex

PUBLIC_BRIDGE=br-ex

OVS_BRIDGE_MAPPINGS=public:br-ex

# Disable Identity v2

ENABLE_IDENTITY_V2=False

Posted by seungkyua@gmail.com

1. Configure log rotation

    - keeps log files from piling up


2. Configure Availability Zones and host aggregates

    - lets VMs be scheduled efficiently


3. Set cpu, memory, and disk allocation ratios

    - take overcommit into account
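These allocation ratios are typically set in nova.conf; a minimal sketch with illustrative values (assumption: the classic nova.conf option names — tune the numbers for your own overcommit policy):

```ini
# /etc/nova/nova.conf (illustrative values, not recommendations)
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
```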


4. Set inject password / inject file to false in Nova Compute

    - makes VM boot faster


5. Configure Cinder QoS and network QoS

    - QoS on storage and network minimizes interference


6. Review the Neutron network configuration

    - using provider networks avoids tunneling, so they are faster


7. Configure live migration

    - max downtime must be tuned appropriately


8. Measure how fast multiple VMs are created concurrently from a new, uncached image

    - pulling the image can consume all the network bandwidth

    - pre-create a VM from the image on every host so the image is cached in advance


9. Verify that /var/lib/nova, where VM instance data is stored, has enough disk space
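Item 9 can be scripted; a small sketch (assumption: GNU `df` with `--output` support; falls back to `/` on hosts where /var/lib/nova does not exist):

```shell
# Warn when the nova instance directory is filling up.
DIR=/var/lib/nova
[ -d "$DIR" ] || DIR=/        # fall back on hosts without the directory
USED=$(df --output=pcent "$DIR" | tail -1 | tr -dc '0-9')
if [ "$USED" -ge 90 ]; then
    echo "WARNING: $DIR is ${USED}% full"
else
    echo "OK: $DIR is ${USED}% full"
fi
```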



To be continued ...












Docker Training on CentOS7

Container 2016. 4. 28. 09:41

[ VMWare Player Download ]

https://my.vmware.com/en/web/vmware/free#desktop_end_user_computing/vmware_workstation_player/12_0


[ CentOS 7 Download ]

http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso



[ Network Configuration ]

VMnet8 192.168.75.1


https://www.lesstif.com/pages/viewpage.action?pageId=13631535

https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-dhcp-configuring-client.html


vi /etc/sysconfig/network-scripts/ifcfg-eno16777728

TYPE=Ethernet

BOOTPROTO=none

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=eno16777728

UUID=aa6807ce-8df6-428d-8af3-f21915570efb

DEVICE=eno16777728

ONBOOT=yes

PREFIX=24

GATEWAY=192.168.75.2

DNS1=192.168.75.2

IPADDR=192.168.75.133



service network restart



[ sudo Setup ]

stack   ALL=(ALL:ALL) NOPASSWD:ALL



[ Technical Components ]

Container formats : libcontainer (the native Linux container format), lxc (a generic container format)

Linux kernel namespaces : isolation between filesystems, processes, and networks

Linux kernel cgroups : CPU and memory isolation and grouping

copy-on-write (CoW) : filesystems are created copy-on-write; the filesystem is made of layers

Logging : STDOUT, STDERR, and STDIN are written to the logs

Interactive shell : a pseudo-tty is created and attached to STDIN, giving an interactive shell into the container



[ Disk Type ]

AUFS, zfs, btrfs, vfs, Device-mapper, overlayfs



[ Components ]

Docker client   : DOCKER_HOST=tcp://192.168.75.133:2375

Docker server   : /etc/sysconfig/docker, /etc/sysconfig/docker-network

Docker images

Docker container



[ Container Boot Sequence ]

bootfs -> moved into container memory -> bootfs unmounted -> RAM used by initrd freed ->
rootfs mounted (read-only, the OS image) -> read-write filesystem mounted



[ Device Mapper ]

sudo yum install -y device-mapper


ls -l /sys/class/misc/device-mapper

sudo grep device-mapper /proc/devices

sudo modprobe dm_mod




[ Install Prerequisite Packages ]

sudo yum -y update

sudo yum -y install git tree


yum whatprovides netstat

yum -y install net-tools


 

[ Installing docker on CentOS 6 ]

$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

$ sudo yum -y install lxc-docker



## Install docker

$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'

[dockerrepo]

name=Docker Repository

baseurl=https://yum.dockerproject.org/repo/main/centos/7/

enabled=1

gpgcheck=1

gpgkey=https://yum.dockerproject.org/gpg

EOF


$ sudo yum install docker-engine


## Modify the docker options

$ sudo vi /usr/lib/systemd/system/docker.service

ExecStart=/usr/bin/docker daemon -H unix:///var/run/docker.sock \

-H tcp://0.0.0.0:2375


$ sudo systemctl daemon-reload

$ sudo systemctl restart docker

$ sudo systemctl status docker


## Start automatically at boot

$ sudo systemctl enable docker


## Add the docker group to the stack user

$ sudo usermod -aG docker stack



## Use OverlayFS with docker

$ sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'

overlay

EOF



## reboot

$ sudo reboot



## Verify OverlayFS

$ lsmod | grep overlay


$ sudo mkdir -p /etc/systemd/system/docker.service.d && sudo tee /etc/systemd/system/docker.service.d/override.conf <<- EOF

[Service]

ExecStart=

ExecStart=/usr/bin/docker daemon --storage-driver=overlay -H fd:// \

-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

EOF


$ sudo systemctl daemon-reload

$ sudo systemctl restart docker




## Connecting from a client

## when the server was started with the tcp option, connect as below

export DOCKER_HOST=tcp://192.168.75.133:2375

$ docker ps



## With no server options, connect over the unix socket

docker -H unix:///var/run/docker.sock ps


$ export DOCKER_HOST=unix:///var/run/docker.sock

$ docker ps


## Add it as an environment variable

$ vi ~/.bashrc

export DOCKER_HOST=tcp://192.168.75.133:2375





## docker uninstall

$ yum list installed | grep docker

$ sudo yum -y remove docker-engine.x86_64


## also delete the images and containers

$ sudo rm -rf /var/lib/docker




# Upgrading to the latest docker after installing via yum

see https://get.docker.com for reference




## docker run, inspecting overlayfs, and choosing a storage driver

## explanation of docker storage drivers

http://play.joinc.co.kr/w/man/12/docker/storage


## https://docs.docker.com/engine/userguide/storagedriver/selectadriver/


## Build a Docker image

$ docker build -t example/docker-node-hello:latest .

$ docker build --no-cache -t example/docker-node-hello:latest .



## Docker run

$ docker run -d -p 8090:8080 --name node-hello example/docker-node-hello:latest


$ docker run -d -p 8090:8080 -e WHO="Seungkyu Ahn" --name node-hello \

          example/docker-node-hello:latest



## Enter the container and check its mounts information

$ docker exec -it 3c3ca0ce3470 /bin/bash


root@3c3ca0ce3470:/data/app# touch bbb.txt

root@3c3ca0ce3470:/data/app# cat /proc/mounts | grep overlay

lowerdir=/var/lib/docker/overlay/cc4f0662e566f0ad9069abfd523ff67c38a41488aaaa06d474cb027ca64cafa2/root

upperdir=/var/lib/docker/overlay/ede9464970bb229267c8c548f8612e801002cec2d4f524378f5acb58ccde0d98/upper

workdir=/var/lib/docker/overlay/ede9464970bb229267c8c548f8612e801002cec2d4f524378f5acb58ccde0d98/work




## Check the docker volume

# cd /var/lib/docker/overlay

# cd ede9464970bb229267c8c548f8612e801002cec2d4f524378f5acb58ccde0d98

# ls -al

-rw-r--r--.  1 root root   64 Jul 11 03:22 lower-id

drwx------.  2 root root    6 Jul 11 03:22 merged

drwxr-xr-x.  9 root root 4096 Jul 11 03:26 upper

drwx------.  3 root root   17 Jul 11 03:22 work


# cat lower-id

cc4f0662e566f0ad9069abfd523ff67c38a41488aaaa06d474cb027ca64cafa2


# find . -name bbb.txt

./upper/data/app/bbb.txt



## Structure of an overlay volume

base image volume : the root directory of the layer named in lower-id

container volume  : the upper directory
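The layering described above can be sketched with plain directories (a simulation only, not a real overlay mount; the paths are made up):

```shell
# Simulated overlay layout: lower holds the image layer, upper holds container writes.
mkdir -p demo/lower/data/app demo/upper/data/app
touch demo/lower/data/app/aaa.txt   # file that shipped with the image (lower, read-only)
touch demo/upper/data/app/bbb.txt   # file created inside the running container (upper, writable)

# Only container-created or modified files appear in the upper directory:
find demo/upper -name 'bbb.txt'
find demo/upper -name 'aaa.txt'     # prints nothing: image files stay in lower
```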




## Download the docker source

$ go get github.com/docker/docker


## dependency (cmd/docker/docker.go)

$ go get github.com/Sirupsen/logrus


## dependency (cmd/dockerd/daemon.go)

$ go get github.com/docker/distribution

$ go get github.com/docker/go-connections



## If you want to become a docker contributor, see the URL below

## https://github.com/docker/docker/tree/master/project

## Fork the docker/docker project on github


$ git clone https://github.com/seungkyua/docker.git docker-fork

$ cd docker-fork


$ git config --local user.name "Seungkyu Ahn"

$ git config --local user.email "seungkyua@gmail.com"


$ git remote add upstream https://github.com/docker/docker.git


$ git config --local -l

$ git remote -v


$ git checkout -b dry-run-test


$ git branch

* dry-run-test

  master


$ touch TEST.md


$ git add TEST.md



## The -s option automatically adds sign-off information to the commit message:

## Signed-off-by: Seungkyu Ahn <seungkyua@gmail.com>

## What belongs in the commit log

## for a bug fix:

fixes #xxxx,  closes #xxxx


**- What I did**

**- How I did it**

**- How to verify it**

**- Description for the changelog**

<!--

Write a short (one line) summary that describes the changes in this

pull request for inclusion in the changelog:

-->



$ git commit -s -m "Making a dry run test."


$ git push --set-upstream origin dry-run-test

Username for 'https://github.com': seungkyua

Password for 'https://seungkyua@github.com':




## Installing docker on Mac

## https://docs.docker.com/machine/install-machine/

$ docker-machine create --driver virtualbox default-docker

$ docker-machine ls

$ docker-machine env default-docker

$ eval "$(docker-machine env default-docker)"



## Continuing the contribution workflow

## build a development environment image and run it in a container.

$ make shell


## Inside the docker container, build the docker binary

root@143823c11fba:/go/src/github.com/docker/docker# hack/make.sh binary


## Copy the binaries

# cp bundles/1.12.0-dev/binary-client/docker* /usr/bin

# cp bundles/1.12.0-dev/binary-daemon/docker* /usr/bin


## Run the docker daemon in the background

# docker daemon -D&




## Re-enter the docker container, edit a file, and recompile

# vi api/client/container/attach.go

flags.BoolVar(&opts.noStdin, "no-stdin", false, "Do not attach STDIN (standard in)")


# hack/make.sh binary

# cp bundles/1.12.0-dev/binary-client/docker* /usr/bin

# cp bundles/1.12.0-dev/binary-daemon/docker* /usr/bin

# docker daemon -D&


## Verify the change

# docker attach --help



## Tests (the arguments determine which kind of tests run)

## test : Run the unit, integration and docker-py tests.

## test-unit : Run just the unit tests.

## test-integration-cli : Run the test for the integration command line interface.

## test-docker-py : Run the tests for Docker API client.

$ make test



## How to run tests inside the development container

$ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker \

docker-dev:dry-run-test /bin/bash


## when using hack/make.sh, dynbinary binary cross must be given as targets

# hack/make.sh dynbinary binary cross test-unit test-integration-cli test-docker-py

or run only the unit tests:

# hack/make.sh dynbinary binary cross test-unit



## Run a unit test

$ TESTDIRS='opts' TESTFLAGS='-test.run ^TestValidateIPAddress$' make test-unit



## Run an integration test

$ TESTFLAGS='-check.f DockerSuite.TestBuild*' make test-integration-cli

or, inside the development container:

# TESTFLAGS='-check.f TestBuild*' hack/make.sh binary test-integration-cli



## How to pick an issue and keep the local branch, remote repository, and docker repository in sync

## https://docs.docker.com/opensource/workflow/find-an-issue/

## an issue should carry two kinds of labels matching your situation:

exp/beginner, exp/intermediate, exp/expert

kind/bug, kind/docs, kind/enhancement, kind/feature, kind/question


## commenting #dibs on an issue means you are claiming it


## Check out master

$ git checkout master


## Pull the latest code from the docker repository into local

$ git fetch upstream master

$ git rebase upstream/master


## Push the latest local code to the remote repository

$ git push origin master


## Create a branch for issue number 11038

$ git checkout -b 11038-fix-rhel-link


## Just in case, rebase the branch onto the latest docker repository code

$ git rebase upstream/master





[ Cleanly wiping docker when it will not start or keeps erroring ]

systemctl status docker.service


# on a mount error

du -h /var/lib/docker/

delete the files under /var/lib/docker/container/

delete the files under /var/lib/docker/devicemapper/metadata/

delete the files under /var/lib/docker/devicemapper/mnt/

delete the files under /var/lib/docker/volumes/

delete the files under /var/lib/docker/graph/

delete the /var/lib/docker/linkgraph.db file


delete /var/run/docker.pid

delete /var/run/docker.sock


# Remove the device mapper devices

lsblk

grep docker /proc/*/mounts

systemd-cgls

dmsetup ls

ls -al /dev/mapper/docker-*      # for each device in the resulting list, call it $dm

umount $dm

dmsetup remove $dm




[ Debug Setup ]

# Editing /usr/lib/systemd/system/docker.service also updates

/etc/systemd/system/multi-user.target.wants/docker.service and

/lib/systemd/system/docker.service automatically (they point at the same unit file)


vi /usr/lib/systemd/system/docker.service

...

ExecStart=/bin/sh -c 'DEBUG=1 /usr/bin/docker daemon $OPTIONS \

...


sudo systemctl daemon-reload

sudo systemctl restart docker



[ Checking system services ]

systemctl list-units --type service

systemctl list-unit-files



[ Testing the docker installation ]

docker run --rm -ti centos:latest /bin/bash



[ Sample Dockerfile ]

FROM node:0.10

MAINTAINER Anna Doe <anna@example.com>

LABEL "rating"="Five Stars" "class"="First Class"


USER root

ENV AP /data/app

ENV SCPATH /etc/supervisor/conf.d

RUN apt-get -y update


# The daemons

RUN apt-get -y install supervisor

RUN mkdir -p /var/log/supervisor

   

# Supervisor Configuration

ADD ./supervisord/conf.d/* $SCPATH/


# Application Code

ADD *.js* $AP/

WORKDIR $AP

RUN npm install

CMD ["supervisord", "-n"]



git clone https://github.com/spkane/docker-node-hello.git

cd docker-node-hello


tree -a -I .git







[ Docker Hub Registry ]

# User login

docker login


Username: seungkyua

Password: 

Email: seungkyua@gmail.com

WARNING: login credentials saved in /root/.docker/config.json

Login Succeeded


# User logout

docker logout


# Push to the hub

docker tag example/docker-node-hello seungkyua/docker-node-hello

docker push seungkyua/docker-node-hello



# restart option

docker run -ti --restart=on-failure:3 -m 200m --memory-swap=300m \

     progrium/stress --cpu 2 --io 1 --vm 2 --vm-bytes 128M --timeout 120s



# stop

docker stop -t 25 node-hello       # stop sends SIGTERM; -t waits 25 seconds, then SIGKILL
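The stop semantics can be demonstrated without docker, using plain signals (a sketch; `sleep` stands in for the container's main process):

```shell
# What `docker stop` does: first SIGTERM (graceful), then SIGKILL after the -t grace period.
sleep 100 &
PID=$!
kill -TERM "$PID"            # the signal `docker stop` sends first
wait "$PID"                  # exit status 128+15 = 143 when the process died from SIGTERM
echo "exit status: $?"
# A process that ignored SIGTERM would instead be SIGKILLed when the timeout expires.
```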




[ Deleting containers, images, and volumes ]

# delete all stopped docker

docker rm $(docker ps -a -q)


# delete untagged images

docker rmi $(docker images -q -f "dangling=true")


# delete volumes

docker volume rm $(docker volume ls -qf dangling=true)



[ Docker Info ]

docker version

docker info




[ docker inspect ]

docker pull ubuntu:latest

docker run -d -t --name ubuntu ubuntu /bin/bash

docker inspect node-hello

docker inspect --format='{{.State.Running}}' node-hello

docker inspect -f '{{.State.Pid}}' node-hello

docker inspect -f '{{.NetworkSettings.IPAddress}}' node-hello

docker inspect -f '{{.Name}} {{.State.Running}}' ubuntu node-hello


# list all port bindings

docker inspect -f '{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' node-hello


# specific port mapping

docker inspect -f '{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}' node-hello


# json print

docker inspect -f '{{json .Config}}' node-hello | python -mjson.tool
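The `json.tool` pretty-printing filter is independent of docker and works on any JSON stream (using the python3 spelling here; the command above uses the python 2 form):

```shell
# Pipe any JSON through python's built-in pretty printer.
echo '{"Image": "example/docker-node-hello:latest", "Tty": false}' | python3 -m json.tool
```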



[ Two ways to get inside a container ]

docker exec -it ubuntu /bin/bash


docker inspect ubuntu | grep \"Pid\":

sudo nsenter --target [Pid] --mount --uts --ipc --net --pid



[ docker logs & stats ]

docker logs node-hello

docker stats node-hello


curl -s http://192.168.75.133:2375/v1.21/containers/node-hello/stats | head -1 | python -mjson.tool


# cAdvisor

docker run \

     --volume=/:/rootfs:ro \

     --volume=/var/run:/var/run:rw \

     --volume=/sys:/sys:ro \

     --volume=/var/lib/docker/:/var/lib/docker:ro \

     --publish=8091:8080 \

     --detach=true \

     --name=cadvisor \

     google/cadvisor:latest




[ ssh dockerfile ]

vi Dockerfile


FROM ubuntu:14.04

MAINTAINER Sven Dowideit <SvenDowideit@docker.com>

ENV REFRESHED_AT 2016-04-30


RUN apt-get update && apt-get install -y openssh-server

RUN mkdir /var/run/sshd

RUN echo 'root:screencast' | chpasswd

RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config


# SSH login fix. Otherwise user is kicked off after login

RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


ENV NOTVISIBLE "in users profile"

RUN echo "export VISIBLE=now" >> /etc/profile


EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]



docker build -t example/sshd .

docker run -d -P --name sshd example/sshd

docker port sshd



# View how a docker image was built

docker history 3ae93df2b9a5



[ Dockerfile Instruction Reference ]

CMD : the command executed when the container launches; only one may be set
          can be overridden, e.g. docker run -d --name sshd example/sshd /usr/sbin/sshd -D


ENTRYPOINT : cannot be overridden by the trailing arguments of docker run; they are appended instead
                      ENTRYPOINT ["/usr/sbin/sshd"]
                      CMD ["-T"]
                     passing -D as the last docker run argument overrides the -T, so sshd runs in the foreground


WORKDIR : changes the working directory
                  WORKDIR /opt/webapp/db
                  docker run -it -w /var/log ubuntu pwd  makes /var/log the working directory inside the container


ENV : sets environment variables; they are used by subsequent RUN instructions and persist in the container
         ENV RVM_PATH /home/rvm/
         RUN gem install unicorn
         is equivalent to RVM_PATH=/home/rvm/ gem install unicorn
        docker run -it -e "WEB_PORT=8080" ubuntu env  shows WEB_PORT already set


USER : the user the image runs as
           USER nginx
           can be overridden with -u, e.g. docker run -d -u nginx example/nginx


VOLUME : adds a volume to the container; on the host, volumes live under /var/lib/docker/volumes/
               multiple volumes can be given as an array, and each path is accessible inside the container
               VOLUME ["/opt/project", "/data" ]
               equivalent to the -v options in docker run -it -v /opt/project -v /data ubuntu /bin/bash


ADD : copies files and directories; the source must be inside the build context (a remote URL is also allowed)
         ADD app /opt/
         copies the app directory in the host build directory into the container's /opt/ directory
         a destination ending in / marks it as a directory; without a trailing / it is treated as a file
         ADD latest.tar.gz /var/www/wordpress/
         extracts the latest.tar.gz archive into the /var/www/wordpress/ directory
         files or directories with the same name already present at the destination are not overwritten
         if the destination path does not exist, it is created with mode 0755 and UID and GID 0


COPY : like ADD, but cannot copy files from outside the build directory and has no URL fetch or extraction features
           COPY conf.d/ /etc/apache2/
           copies only files and directories inside the build directory, along with their filesystem metadata
           UID and GID become 0


ONBUILD : adds a trigger to the image, inserting instructions into a later build
                ONBUILD ADD . /app/src
                ONBUILD RUN cd /app/src && make
                when a new image is built from this image, the ADD and RUN run automatically
                triggers are inherited only one level deep
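The ENTRYPOINT/CMD interplay described above can be sketched in a minimal, hypothetical Dockerfile:

```dockerfile
FROM ubuntu:14.04
# Fixed binary: docker run arguments are appended to this, not substituted for it
ENTRYPOINT ["/bin/echo"]
# Default argument: replaced by anything passed after the image name
CMD ["hello"]
```

Running the image with no arguments echoes "hello"; `docker run <image> world` echoes "world" instead, while the entrypoint itself stays fixed unless `--entrypoint` is given.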




[ Dockerfile ]

# Download from github

$ git clone https://github.com/jamtur01/dockerbook-code

$ cd dockerbook-code/code/5/website

$ docker build -t example/nginx .

$ docker run -d -p 80 --name website \

   -v $PWD/website:/var/www/html/website:ro example/nginx nginx


or



# nginx

$ vi Dockerfile

FROM ubuntu:14.04

MAINTAINER James Turnbull "james@example.com"

ENV REFRESHED_AT 2014-06-01


RUN apt-get update

RUN apt-get -y -q install nginx


RUN mkdir -p /var/www/html/website

ADD nginx/global.conf /etc/nginx/conf.d/

ADD nginx/nginx.conf /etc/nginx/


EXPOSE 80




$ vi nginx/global.conf

server {

        listen          0.0.0.0:80;

        server_name     _;


        root            /var/www/html/website;

        index           index.html index.htm;


        access_log      /var/log/nginx/default_access.log;

        error_log       /var/log/nginx/default_error.log;

}





$ vi nginx/nginx.conf

user www-data;

worker_processes 4;

pid /run/nginx.pid;

daemon off;


events {  }


http {

  sendfile on;

  tcp_nopush on;

  tcp_nodelay on;

  keepalive_timeout 65;

  types_hash_max_size 2048;

  include /etc/nginx/mime.types;

  default_type application/octet-stream;

  access_log /var/log/nginx/access.log;

  error_log /var/log/nginx/error.log;

  gzip on;

  gzip_disable "msie6";

  include /etc/nginx/conf.d/*.conf;

}



$ docker build -t example/nginx .


$ docker history -H --no-trunc=true 3e1cdbcccf11


$ mkdir website; cd website

$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-\

code/master/code/5/website/website/index.html


$ cd ..

$ docker run -d -p 80 --name website \

   -v $PWD/website:/var/www/html/website:ro example/nginx nginx





Jenkins by 구교준 (@Bliexsoft)

$ cd ~/Documents/Docker

$ git clone https://github.com/jenkinsci/docker.git jenkinsci


## Add the Maven installation below

$ vi Dockerfile

# Install Maven - Start
USER root

ENV MAVEN_VERSION 3.3.9

RUN mkdir -p /usr/share/maven \
    && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
       | tar -xzC /usr/share/maven --strip-components=1 \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven

VOLUME /root/.m2
# Install Maven - End

# Setting Jenkins
USER jenkins

COPY config.xml /var/jenkins_home/config.xml
COPY hudson.tasks.Maven.xml /var/jenkins_home/hudson.tasks.Maven.xml

COPY plugins.txt /usr/share/jenkins/ref/
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/ref/plugins.txt




$ vi config.xml
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
  <disabledAdministrativeMonitors/>
  <version>1.651.2</version>
  <numExecutors>2</numExecutors>
  <mode>NORMAL</mode>
  <useSecurity>true</useSecurity>
  <authorizationStrategy class="hudson.security.AuthorizationStrategy$Unsecured"/>
  <securityRealm class="hudson.security.SecurityRealm$None"/>
  <disableRememberMe>false</disableRememberMe>
  <projectNamingStrategy class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
  <workspaceDir>${JENKINS_HOME}/workspace/${ITEM_FULLNAME}</workspaceDir>
  <buildsDir>${ITEM_ROOTDIR}/builds</buildsDir>
  <jdks>
    <jdk>
      <name>jdk8</name>
      <home>/usr/lib/jvm/java-8-openjdk-amd64</home>
      <properties/>
    </jdk>
  </jdks>
  <viewsTabBar class="hudson.views.DefaultViewsTabBar"/>
  <myViewsTabBar class="hudson.views.DefaultMyViewsTabBar"/>
  <clouds/>
  <quietPeriod>5</quietPeriod>
  <scmCheckoutRetryCount>0</scmCheckoutRetryCount>
  <views>
    <hudson.model.AllView>
      <owner class="hudson" reference="../../.."/>
      <name>All</name>
      <filterExecutors>false</filterExecutors>
      <filterQueue>false</filterQueue>
      <properties class="hudson.model.View$PropertyList"/>
    </hudson.model.AllView>
  </views>
  <primaryView>All</primaryView>
  <slaveAgentPort>50000</slaveAgentPort>
  <label></label>
  <nodeProperties/>
  <globalNodeProperties/>
</hudson>



$ vi hudson.tasks.Maven.xml
<?xml version='1.0' encoding='UTF-8'?>
<hudson.tasks.Maven_-DescriptorImpl>
    <installations>
        <hudson.tasks.Maven_-MavenInstallation>
            <name>maven3.3.9</name>
            <home>/usr/share/maven</home>
            <properties/>
        </hudson.tasks.Maven_-MavenInstallation>
    </installations>
</hudson.tasks.Maven_-DescriptorImpl>



$ vi plugins.txt

maven-plugin:2.13

credentials:2.0.7

plain-credentials:1.1

token-macro:1.12.1

cloudfoundry:1.5

klocwork:1.18

ssh-credentials:1.11

matrix-project:1.6

mailer:1.16

scm-api:1.0

promoted-builds:2.25

parameterized-trigger:2.4

git-client:1.19.6

git:2.4.4

github-api:1.75

github:1.19.1










[ Docker in docker ]

https://github.com/jpetazzo/dind




[ Jenkins ]

$ cd dockerbook-code/code/5/jenkins

$ vi Dockerfile

FROM ubuntu:14.04

MAINTAINER james@example.com

ENV REFRESHED_AT 2014-06-01


RUN apt-get update -qq && apt-get install -qqy curl apt-transport-https

RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \

--recv-keys 58118E89F3A912897C070ADBF76221572C52609D

RUN echo deb https://apt.dockerproject.org/repo ubuntu-trusty main > \

/etc/apt/sources.list.d/docker.list

RUN apt-get update -qq && \

apt-get install -qqy iptables ca-certificates openjdk-7-jdk git-core docker-engine


ENV JENKINS_HOME /opt/jenkins/data

ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org


RUN mkdir -p $JENKINS_HOME/plugins

RUN curl -sf -o /opt/jenkins/jenkins.war -L \

$JENKINS_MIRROR/war-stable/latest/jenkins.war


RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ;\

    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \

       -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done


ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh

RUN chmod +x /usr/local/bin/dockerjenkins.sh


VOLUME /var/lib/docker


EXPOSE 8080


ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]




$ vi dockerjenkins.sh

#!/bin/bash


# First, make sure that cgroups are mounted correctly.

CGROUP=/sys/fs/cgroup


[ -d $CGROUP ] ||

  mkdir $CGROUP


mountpoint -q $CGROUP ||

  mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {

    echo "Could not make a tmpfs mount. Did you use -privileged?"

    exit 1

  }


# Mount the cgroup hierarchies exactly as they are in the parent system.

for SUBSYS in $(cut -d: -f2 /proc/1/cgroup)

do

  [ -d $CGROUP/$SUBSYS ] || mkdir $CGROUP/$SUBSYS

  mountpoint -q $CGROUP/$SUBSYS ||

    mount -n -t cgroup -o $SUBSYS cgroup $CGROUP/$SUBSYS

done


# Now, close extraneous file descriptors.

pushd /proc/self/fd

for FD in *

do

  case "$FD" in

  # Keep stdin/stdout/stderr

  [012])

    ;;

  # Nuke everything else

  *)

    eval exec "$FD>&-"

    ;;

  esac

done

popd


docker daemon &

exec java -jar /opt/jenkins/jenkins.war







$ docker build -t example/dockerjenkins .

$ docker run -p 8080:8080 --name jenkins --privileged -d example/dockerjenkins




## Connecting to the docker daemon on another host (certs can be skipped)

## centos

export DOCKER_HOST=tcp://192.168.75.133:2375



## boot2docker

$ export DOCKER_HOST=tcp://192.168.59.103:2376

export DOCKER_TLS_VERIFY=1 

export DOCKER_CERT_PATH=/Users/ahnsk/.boot2docker/certs/boot2docker-vm












[ Server IP Information ]

eth0 : NAT type       (vmnet2)  192.168.75.138       Public Network

eth1 : Host-only type (vmnet3)  192.168.230.138      Private Network

[ Second Compute Node for a Multi-Node Setup ]
eth0 : NAT type       (vmnet2)  192.168.75.139       Public Network
eth1 : Host-only type (vmnet3)  192.168.230.139      Private Network

[ User Selection ]
Create and work as the stack user

[ visudo Setup ]
stack   ALL=(ALL:ALL) NOPASSWD:ALL

[ vi /etc/network/interfaces ]
auto lo
iface lo inet loopback

auto ens33
iface ens33 inet static
        address 192.168.75.138
        netmask 255.255.255.0
        gateway 192.168.75.2
        dns-nameservers 8.8.8.8 8.8.4.4

auto ens34
iface ens34 inet static
        address 192.168.230.138
        netmask 255.255.255.0


[ Hostname Setup ]
mkdir -p ~/Documents/scripts
cd ~/Documents/scripts

vi servers.txt
192.168.230.138 devstack01
192.168.230.139 devstack02

vi 01-hosts-setup.sh
#!/bin/bash

SERVERLIST=$HOME/Documents/scripts/servers.txt
MASTER_IP="192.168.230.138"
MASTER_HOSTNAME="devstack01"
SSH_USER="stack"

function set_sshkey() {
    local server=$1
    if [[ $server == "$MASTER_IP" ]]; then
        if [[ ! -f "${HOME}/.ssh/id_rsa" ]]; then
            yes "" | ssh-keygen -t rsa -N ""
        else
            echo "skip to create ssh-keygen"
        fi
    fi
    cat ~/.ssh/id_rsa.pub | ssh $SSH_USER@$server -oStrictHostKeyChecking=no \
        "if [ ! -f ~/.ssh/authorized_keys ] || ! grep -q ${MASTER_HOSTNAME} ~/.ssh/authorized_keys; then \
             umask 077; test -d .ssh || mkdir -p .ssh; cat >> ~/.ssh/authorized_keys; \
         fi"
    echo "$server ssh-key ..... done"
}

function change_hostname() {
    local server=$1
    local hostname=$2
    echo ${hostname} | ssh $SSH_USER@$server \
    "if ! grep -q ${hostname} /etc/hostname; then \
         sudo su -c 'cat > /etc/hostname'; \
         sudo hostname -F /etc/hostname;
     fi"
    echo "$server $hostname ..... done"
}

function change_hostfile() {
    local server=$1
    cat servers.txt | ssh $SSH_USER@$server \
    "if ! grep -q ${MASTER_HOSTNAME} /etc/hosts; then \
         sudo su -c 'cat >> /etc/hosts';
     fi"
    echo "$server hostfile .... done"
}

echo "setting sshkey ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        set_sshkey $server
    fi
done < $SERVERLIST

echo "changing hostname ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        hostname=$(echo $line | awk '{print $2}')
        change_hostname $server $hostname
    fi
done < $SERVERLIST

echo "changing hosts file ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        change_hostfile $server
    fi
done < $SERVERLIST
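The servers.txt parsing shared by all three loops can be exercised on its own (assumption: the "IP hostname" format shown above, with comment lines starting with #):

```shell
# Build a throwaway server list, including a commented-out entry.
cat > /tmp/servers.txt <<'EOF'
192.168.230.138 devstack01
#192.168.230.140 skipped-comment
192.168.230.139 devstack02
EOF

# Same skip-comments loop as in 01-hosts-setup.sh, printing what it would act on.
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        hostname=$(echo $line | awk '{print $2}')
        echo "$server -> $hostname"
    fi
done < /tmp/servers.txt
```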



[ NTP Setup ]
vi 02-ntp-setup.sh
#!/bin/bash

SERVERLIST=$HOME/Documents/scripts/servers.txt
MASTER_IP="192.168.230.138"
SSH_USER="stack"

function ntp_master_setup() {
    local server=$1
    echo $server | ssh ${SSH_USER}@$server \
    "sudo apt-get update; \
     sudo apt-get install -y bridge-utils libvirt-bin ntp ntpdate; \
     if ! grep -q 'server 127.127.1.0' /etc/ntp.conf; then \
         sudo sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 3.ubuntu.pool.ntp.org/server time.bora.net/g' /etc/ntp.conf; \
         sudo sed -i 's/server ntp.ubuntu.com/server 127.127.1.0/g' /etc/ntp.conf; \
         sudo sed -i 's/restrict 127.0.0.1/restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap/g' /etc/ntp.conf; \
         sudo service ntp restart; \
     fi; \
     sudo ntpdate -u time.bora.net; \
     sudo virsh net-destroy default; \
     sudo virsh net-undefine default"
}

function ntp_slave_setup() {
    local server=$1
    echo $server | ssh ${SSH_USER}@$server \
    "sudo apt-get update; \
     sudo apt-get install -y bridge-utils libvirt-bin ntp ntpdate; \
     if ! grep -q ${MASTER_IP} /etc/ntp.conf; then \
         sudo sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server ntp.ubuntu.com/server $MASTER_IP/g' /etc/ntp.conf; \
         sudo service ntp restart; \
     fi; \
     sudo ntpdate -u $MASTER_IP; \
     sudo virsh net-destroy default; \
     sudo virsh net-undefine default"
}

echo "ntp master setting ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        if [[ $server == "$MASTER_IP" ]]; then
            ntp_master_setup $server
        fi
    fi
done < $SERVERLIST

echo "ntp slave setting ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        if [[ $server != "$MASTER_IP" ]]; then
            ntp_slave_setup $server
        fi
    fi
done < $SERVERLIST



[ local.conf File ]
mkdir -p ~/Documents/github
cd ~/Documents/github
git clone https://github.com/openstack-dev/devstack.git
cd devstack

vi local.conf
[[local|localrc]]
HOST_IP=192.168.75.138
SERVICE_HOST=192.168.75.138
MYSQL_HOST=192.168.75.138
RABBIT_HOST=192.168.75.138
GLANCE_HOSTPORT=192.168.75.138:9292
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret

# Do not use Nova-Network
disable_service n-net

# Neutron service
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.75.0/24"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=192.168.75.193,end=192.168.75.254
PUBLIC_NETWORK_GATEWAY="192.168.75.2"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=ens33

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

# Nova service
enable_service n-api
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service n-novnc
enable_service n-cauth

# Cinder service
enable_service cinder
enable_service c-api
enable_service c-vol
enable_service c-sch
enable_service c-bak

# Tempest service
enable_service tempest

# Swift service
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Heat service
enable_service heat
enable_service h-api
enable_service h-api-cfn
enable_service h-api-cw
enable_service h-eng

# Enable plugin neutron-lbaas, octavia
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas master
enable_plugin octavia https://git.openstack.org/openstack/octavia

# Enable plugin Magnum
#enable_plugin magnum https://github.com/openstack/magnum master
#enable_plugin magnum-ui https://github.com/openstack/magnum-ui master

# Enable plugin Monasca (on Ubuntu 16.04 this needs adjusting for systemd)
enable_plugin monasca-api https://github.com/openstack/monasca-api master
enable_plugin monasca-log-api https://github.com/openstack/monasca-log-api master

MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-python}

MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-python}

MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-influxdb}



# Cinder configuration
VOLUME_GROUP="cinder-volumes"
VOLUME_NAME_PREFIX="volume-"

# Images
# Use this image when creating test instances
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Use this image when working with Orchestration (Heat)
IMAGE_URLS+=",https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2"

KEYSTONE_CATALOG_BACKEND=sql
API_RATE_LIMIT=False
SWIFT_HASH=testing
SWIFT_REPLICAS=1
VOLUME_BACKING_FILE_SIZE=70000M

LOGFILE=$DEST/logs/stack.sh.log

# A clean install every time
RECLONE=yes



[ Adding a compute node ]
vi local.conf
[[local|localrc]]
HOST_IP=192.168.75.139
SERVICE_HOST=192.168.75.138
MYSQL_HOST=192.168.75.138
RABBIT_HOST=192.168.75.138
GLANCE_HOSTPORT=192.168.75.138:9292
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret

# Neutron options
PUBLIC_INTERFACE=ens33
ENABLED_SERVICES=n-cpu,n-novnc,rabbit,q-agt

LOGFILE=$DEST/logs/stack.sh.log



[ Run the install ]
./stack.sh


[ Mount the storage ]
sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 /opt/stack/data/swift/drives/images/swift.img /opt/stack/data/swift/drives/sdb1

sudo losetup /dev/loop1 /opt/stack/data/cinder-volumes-default-backing-file

sudo losetup /dev/loop2 /opt/stack/data/cinder-volumes-lvmdriver-1-backing-file


[ CPU, RAM, disk overcommit settings ]
vi /etc/nova/nova.conf

scheduler_default_filters = ..., CoreFilter          # add CoreFilter
cpu_allocation_ratio=50.0
ram_allocation_ratio=16.0
disk_allocation_ratio=50.0
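
These ratios are overcommit multipliers: the scheduler treats each host as having (physical resources × ratio) available. A quick sketch of the arithmetic, using a hypothetical 16-core / 64 GB host:

```shell
PHYS_CORES=16        # hypothetical host
PHYS_RAM_MB=65536
CPU_RATIO=50         # cpu_allocation_ratio
RAM_RATIO=16         # ram_allocation_ratio

SCHED_VCPUS=$((PHYS_CORES * CPU_RATIO))      # vCPUs the scheduler can hand out
SCHED_RAM_MB=$((PHYS_RAM_MB * RAM_RATIO))    # MB of RAM the scheduler can hand out
echo "schedulable: $SCHED_VCPUS vCPUs, $SCHED_RAM_MB MB RAM"
```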


[ Run the services ]
screen -c stack-screenrc


[ Create a VM ]
. openrc admin demo


openstack project list
openstack security group list

# add rules to the default security group
openstack security group rule create --proto icmp --src-ip 0.0.0.0/0 --dst-port -1 --ingress 2d95031b-132b-4d46-aacd-f392cdd8c4fb

openstack security group rule create --proto tcp --src-ip 0.0.0.0/0 --dst-port 1:65535 --ingress 2d95031b-132b-4d46-aacd-f392cdd8c4fb

# register the keypair (imports an existing public key)
openstack keypair create --public-key ~/.ssh/id_rsa.pub magnum-key


openstack flavor list
openstack image list
openstack network list

# nova boot
openstack server create --image 7e688989-e59b-4b20-a562-1de946ee91e9 --flavor m1.tiny  --nic net-id=f57b8f2c-cd67-4d49-b38c-393dbb773c9b  --key-name magnum-key --security-group default test-01


# create a floating IP and attach it to the server
openstack ip floating create public
openstack ip floating list
openstack ip floating add 192.168.75.194 test-01


# list the router/dhcp network namespaces
sudo ip netns
qdhcp-f57b8f2c-cd67-4d49-b38c-393dbb773c9b
qrouter-b46e14d5-4ef5-4bfa-8dc3-463a982688ab


[ How to tcpdump the packet path ]
# Compute Node
[vm] -> tap:[qbrb97b5aa3-f8 Linux Bridge]:qvbb97b5aa3-f8 -> qvob97b5aa3-f8:[OVS br-int Bridge]:patch-tun -> patch-int:[OVS br-tun Bridge]:br-tun ->

# Network Node
br-tun:OVS br-tun Bridge:patch-int -> patch-tun:OVS br-int Bridge:qr-c163af1e-53 -> 
qr-c163af1e-53:qrouter(Namespace) -> qg-d8187261-68:qg(Namespace) -> 
qg-d8187261-68:OVS br-int Bridge:int-br-ex -> phy-br-ex:OVS br-ex Bridge -> NIC 

sudo tcpdump -n -e -i qbrb97b5aa3-f8 | grep 10.0.0.3
sudo tcpdump -n -e -i qvbb97b5aa3-f8 | grep 10.0.0.3
sudo tcpdump -n -e -i qvob97b5aa3-f8 | grep 10.0.0.3
sudo ip netns exec qrouter-b46e14d5-4ef5-4bfa-8dc3-463a982688ab tcpdump -n -e -i qr-c163af1e-53 | grep 10.0.0.3



[ Create a Magnum k8s cluster ]
cd ~/Documents/github/devstack/files
wget https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2
glance image-create --name fedora-21-atomic-5 \
                    --visibility public \
                    --disk-format qcow2 \
                    --os-distro fedora-atomic \
                    --container-format bare < fedora-21-atomic-5.qcow2


magnum service-list

magnum baymodel-create --name k8sbaymodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id magnum-key \
                       --external-network-id public \
                       --dns-nameserver 8.8.8.8 \
                       --flavor-id m1.small \
                       --docker-volume-size 5 \
                       --network-driver flannel \
                       --coe kubernetes

magnum baymodel-list
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1

neutron lb-pool-list
neutron lb-vip-list
neutron lb-member-list

magnum bay-list


[ What to delete manually when Magnum cluster creation fails ]
delete the floating IPs - api-pool-vip, kube-master, kube-node
openstack ip floating list
sudo ip netns exec qrouter-2f49aeb4-421c-4994-923a-5aafe453fa3d ip a

delete api.pool.vip
neutron lb-vip-list
neutron lb-pool-list
neutron lb-member-list

# delete the private network
openstack network list

# delete the router and its external gateway
openstack router list
openstack port list
openstack router remove port        (removes the gateway port)
openstack router remove subnet    (removes the subnet)











반응형
Posted by seungkyua@gmail.com
,

ca-key.pem -> ca.pem

server-key.pem -> server.csr -> server.csr + (ca-key.pem + ca.pem) -> server.cert

client-key.pem -> client.csr -> client.csr + (ca-key.pem + ca.pem) -> client.cert



[ CA 생성 ]


1. ca-key.pem => ca.pem    (ca.crt: the client CA file)

$ sudo mkdir -p /etc/docker

$ cd /etc/docker

$ echo 01 | sudo tee ca.srl


$ sudo openssl genrsa -des3 -out ca-key.pem

Enter pass phrase for ca-key.pem:

Verifying - Enter pass phrase for ca-key.pem:


$ sudo openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem

Enter pass phrase for ca-key.pem:

...

Common Name (e.g. server FQDN or Your name) []: *         (ex : www.ahnseungkyu.com)



[ Create the server cert ]


1. server-key.pem => server.csr    (the Common Name, i.e. the server FQDN, matters here)

$ sudo openssl genrsa -des3 -out server-key.pem

Enter pass phrase for server-key.pem:

Verifying - Enter pass phrase for server-key.pem:


$ sudo openssl req -new -key server-key.pem -out server.csr

Enter pass phrase for server-key.pem:

...

Common Name (e.g. server FQDN or Your name) []: *         (ex : www.ahnseungkyu.com)


2. ca-key.pem + ca.pem + server.csr => server-cert.pem (server.cert: the server cert file)

$ sudo openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -out server-cert.pem

Enter pass phrase for ca-key.pem:


3. Remove the passphrase from server-key.pem (server.key: the server private key file)

$ sudo openssl rsa -in server-key.pem -out server-key.pem

Enter pass phrase for server-key.pem:

writing RSA key


4. Fix permissions

$ sudo chmod 600 /etc/docker/server-key.pem /etc/docker/server-cert.pem /etc/docker/ca-key.pem /etc/docker/ca.pem
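
The CA and server-cert steps above can also be scripted end to end. A non-interactive sketch (keys are generated without a passphrase so no later strip step is needed, `-subj` replaces the interactive prompts, and the CN is an example domain):

```shell
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# CA: private key + self-signed certificate
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem \
    -subj "/CN=www.example.com"

# Server: private key -> CSR -> cert signed by the CA
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server.csr \
    -subj "/CN=www.example.com"
openssl x509 -req -days 365 -in server.csr \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

chmod 600 ca-key.pem server-key.pem
openssl verify -CAfile ca.pem server-cert.pem
```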




[ Docker daemon configuration ]


Ubuntu, Debian : /etc/default/docker

RHEL, Fedora    : /etc/sysconfig/docker

systemd version  : /usr/lib/systemd/system/docker.service




[ Running the Docker server under systemd ]


ExecStart=/usr/bin/docker -d -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem


[ Reload and restart the Docker daemon ]

$ sudo systemctl --system daemon-reload




[ Create the client cert ]


1. client-key.pem => client.csr

$ sudo openssl genrsa -des3 -out client-key.pem

Enter pass phrase for client-key.pem:

Verifying - Enter pass phrase for client-key.pem:


$ sudo openssl req -new -key client-key.pem -out client.csr

Enter pass phrase for client-key.pem:

...

Common Name (e.g. server FQDN or Your name) []:



2. Add the client-authentication extension

$ echo extendedKeyUsage = clientAuth > extfile.cnf



3. ca-key.pem + ca.pem + client.csr => client-cert.pem

$ sudo openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem -out client-cert.pem -extfile extfile.cnf

Enter pass phrase for ca-key.pem:



4. Remove the passphrase from client-key.pem

$ sudo openssl rsa -in client-key.pem -out client-key.pem

Enter pass phrase for client-key.pem:

writing RSA key
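
The client flow differs from the server flow only in the `extendedKeyUsage = clientAuth` extension. A self-contained, non-interactive sketch (CA regenerated inline, subject names are examples) that also checks the extension actually lands in the signed cert:

```shell
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem -subj "/CN=test-ca"

openssl genrsa -out client-key.pem 2048
openssl req -new -key client-key.pem -out client.csr -subj "/CN=client"

echo "extendedKeyUsage = clientAuth" > extfile.cnf
openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
    -set_serial 01 -out client-cert.pem -extfile extfile.cnf

# The extension shows up in the cert text as "TLS Web Client Authentication":
openssl x509 -in client-cert.pem -noout -text | grep "TLS Web Client Authentication"
```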




[ Configure SSL on the Docker client ]


$ mkdir -p ~/.docker

$ cp ca.pem ~/.docker/ca.pem

$ cp client-key.pem ~/.docker/key.pem

$ cp client-cert.pem ~/.docker/cert.pem

$ chmod 600 ~/.docker/key.pem ~/.docker/cert.pem


# test the docker connection

$ sudo docker -H=docker.example.com:2376 --tlsverify info



# server

# sudo docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem \

--tlskey=server-key.pem -H=0.0.0.0:4243


# client -- note that this uses --tls instead of --tlsverify, which I had trouble with 

# docker --tls --tlscacert=ca.pem --tlscert=client-cert.pem --tlskey=client-key.pem \

-H=dns-name-of-docker-host:4243










https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/connecting-applications.md


First, create a Docker registry:

http://www.ahnseungkyu.com/206


1. Create the tomcat RC

$ cd Documents/registry/tomcat


$ vi tomcat8-rc.yaml


apiVersion: v1

kind: ReplicationController

metadata:

  name: tomcat8

  labels:

    name: tomcat8

spec:

  replicas: 1

  selector:

    name: tomcat8

  template:

    metadata:

      labels:

        name: tomcat8

    spec:

      containers:

      - name: tomcat8

        image: privateregistry.com:5000/tomcat-jre8:8.0.30

        ports:

        - containerPort: 8080



$ kubectl -s 192.168.230.211:8080 create -f tomcat8-rc.yaml


$ kubectl -s 192.168.230.211:8080 get rc tomcat8                    # verify


2. Create the tomcat service

$ vi tomcat8-svc.yaml


apiVersion: v1

kind: Service

metadata:

  labels:

    name: tomcat8

  name: tomcat8

spec:

  ports:

    # the port that this service should serve on

    - port: 8088                     # the service's own port

      targetPort: 8080             # the container port inside the pod

      nodePort: 30001

  # label keys and values that must match in order to receive traffic for this service

  selector:                            # ties the service to the backing pods

    name: tomcat8

  type: NodePort


$ kubectl create -f tomcat8-svc.yaml



[ Verify the service ]

http://192.168.75.212:30001/

http://192.168.75.213:30001/



$ kubectl describe pod tomcat8-5pchl

$ kubectl get rc

$ kubectl describe rc tomcat8

$ kubectl get service

$ kubectl describe service tomcat8


$ kubectl get endpoints


$ kubectl get events



[ Query by label ]

$ kubectl get service -a -l name=tomcat8

$ kubectl get pods -l name=tomcat8 -o json | grep podIP


[ List everything ]

$ kubectl get --all-namespaces -a service


[ Getting inside a container ]

$ kubectl exec [ pod name ] -c [ container name ] -i -t -- bash -il

$ kubectl exec tomcat8-5pchl -c tomcat8 -i -t -- bash -il


[ Check the built-in services ]

$ kubectl cluster-info



[ How the service is accessed ]

$ kubectl describe svc tomcat8


Name: tomcat8

Namespace: default

Labels: name=tomcat8

Selector:         name=tomcat8

Type: NodePort

IP:         192.168.230.17                         # Service ip

Port:         <unnamed> 8088/TCP          # Service port

NodePort:         <unnamed> 30001/TCP

Endpoints:         172.16.84.4:8080                       #  Pod ip, port

Session Affinity: None

No events.


# reachable at the service IP:port from node01 or node02

curl -k http://192.168.230.17:8088


# call the pod directly from node01 or node02

$ kubectl get pods -o json | grep -i podip

$ curl -k http://172.16.84.4:8080




$ kubectl exec tomcat8-5pchl -- printenv | grep SERVICE

KUBERNETES_SERVICE_HOST=192.168.230.1

KUBERNETES_SERVICE_PORT=443

KUBERNETES_SERVICE_PORT_HTTPS=443


$ kubectl scale rc tomcat8 --replicas=0; kubectl scale rc tomcat8 --replicas=2


$ kubectl get pods -l name=tomcat8 -o wide

NAME            READY     STATUS    RESTARTS   AGE       NODE

tomcat8-dqvcu   1/1       Running   0          35s       192.168.75.212

tomcat8-sppk6   1/1       Running   0          35s       192.168.75.212


$ kubectl exec tomcat8-dqvcu -- printenv | grep SERVICE

KUBERNETES_SERVICE_PORT=443

TOMCAT8_SERVICE_PORT=8088

KUBERNETES_SERVICE_HOST=192.168.230.1

KUBERNETES_SERVICE_PORT_HTTPS=443

TOMCAT8_SERVICE_HOST=192.168.230.17


3. Verify DNS

$ vi curlpod.yaml


apiVersion: v1

kind: Pod

metadata:

  labels:

    name: curlpod

  name: curlpod

spec:

  containers:

  - image: radial/busyboxplus:curl

    command:

      - sleep

      - "3600"

    imagePullPolicy: IfNotPresent

    name: curlcontainer

  restartPolicy: Always



$ kubectl create -f curlpod.yaml

kubectl describe pod curlpod



[ Verify DNS ]

$ kubectl exec curlpod -- nslookup tomcat8

kubectl exec curlpod -- curl http://tomcat8:8088



$ kubectl exec curlpod -c curlcontainer -it -- /bin/sh -il



4. Access each pod instance

https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#accessing-services-running-on-the-cluster


$ kubectl get pods


http://192.168.75.211:8080/api/v1/proxy/namespaces/default/pods/tomcat8-dqvcu/


# look up the docker container ID

docker ps -l -q













First, a Kubernetes cluster must be installed:

http://www.ahnseungkyu.com/200


1. Create self-signed certificate

$ cd Documents

$ mkdir registry

$ cd registry


$ mkdir -p certs && openssl req \

-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \

-x509 -days 36500 -out certs/domain.crt


Country Name (2 letter code) [AU]:  

State or Province Name (full name) [Some-State]:

Locality Name (eg, city) []:

Organization Name (eg, company) [Internet Widgits Pty Ltd]:

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:privateregistry.com

Email Address []:



2. Create a password file (optional, used later)

$ mkdir -p auth

$ docker run --entrypoint htpasswd registry:2 -Bbn test test > auth/htpasswd



3. Copy the cert files

$ vi deployCert.sh

#!/bin/bash


FQDN=privateregistry.com


echo $FQDN


sudo mkdir -p /etc/docker/certs.d/$FQDN

sudo cp certs/domain.crt /etc/docker/certs.d/$FQDN/ca.crt


sudo mkdir -p /opt/docker_volumes/registry/$FQDN

sudo mkdir -p /opt/docker_volumes/registry/$FQDN/data

sudo cp -r certs /opt/docker_volumes/registry/$FQDN


$ ./deployCert.sh



# install the cert on Ubuntu

$ sudo cp /home/stack/Documents/registry/certs/domain.crt /usr/local/share/ca-certificates/.

$ sudo update-ca-certificates


# docker restart

$ sudo service docker restart



# copy the cert to node01 and node02 as well

$ sudo mkdir -p /etc/docker/privateregistry.com

$ sudo cp /home/stack/Documents/registry/certs/domain.crt /etc/docker/privateregistry.com/ca.crt


$ sudo cp /home/stack/Documents/registry/certs/domain.crt /usr/local/share/ca-certificates/.

$ sudo update-ca-certificates


# docker restart

$ sudo service docker restart


$ sudo vi /etc/hosts

192.168.75.211  privateregistry.com


4. Create the registry

# run with docker-compose

# switch to root and install docker-compose

$ sudo su -

# curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose


$ vi kube-registry.yml


kube-registry:

  container_name: kube-registry

  restart: always

  image: registry:2

  ports:

    - 5000:5000

  environment:

    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt

    REGISTRY_HTTP_TLS_KEY: /certs/domain.key

    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry

  volumes:

    - /opt/docker_volumes/registry/privateregistry.com/data:/var/lib/registry

    - /opt/docker_volumes/registry/privateregistry.com/certs:/certs


$ docker-compose -f kube-registry.yml up -d



# run with docker run

docker run -d -p 5000:5000 --restart=always --name kube-registry \

  -v `pwd`/certs:/certs \

  -v /opt/docker_volumes/registry/privateregistry.com/data:/var/lib/registry \

  -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \

  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \

  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \

  registry:2


5. Verify the registry

https://192.168.230.211:5000/v2/_catalog


# verify from node01

$ sudo vi /etc/hosts

192.168.75.211  privateregistry.com


$ docker pull ubuntu

$ docker tag ubuntu privateregistry.com:5000/ubuntu

$ docker push privateregistry.com:5000/ubuntu


# verify from the master

$ docker pull privateregistry.com:5000/ubuntu


6. Delete the registry

docker stop kube-registry && docker rm kube-registry


7. List images

docker images privateregistry.com:5000



8. Build the Tomcat8 Dockerfile

$ mkdir -p tomcat

$ cd tomcat

$ vi Dockerfile


FROM java:8-jre


ENV CATALINA_HOME /usr/local/tomcat

ENV PATH $CATALINA_HOME/bin:$PATH

RUN mkdir -p "$CATALINA_HOME"

WORKDIR $CATALINA_HOME


# runtime dependency for Tomcat Native Libraries

RUN apt-get update && apt-get install -y libapr1 && rm -rf /var/lib/apt/lists/*


# see https://www.apache.org/dist/tomcat/tomcat-8/KEYS

RUN set -ex \

&& for key in \

05AB33110949707C93A279E3D3EFE6B686867BA6 \

07E48665A34DCAFAE522E5E6266191C37C037D42 \

47309207D818FFD8DCD3F83F1931D684307A10A5 \

541FBE7D8F78B25E055DDEE13C370389288584E7 \

61B832AC2F1C5A90F0F9B00A1C506407564C17A3 \

79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED \

9BA44C2621385CB966EBA586F72C284D731FABEE \

A27677289986DB50844682F8ACB77FC2E86E29AC \

A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 \

DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 \

F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE \

F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23 \

; do \

gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \

done


ENV TOMCAT_MAJOR 8

ENV TOMCAT_VERSION 8.5.0

ENV TOMCAT_TGZ_URL https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz


# Tomcat Native 1.2+ requires a newer version of OpenSSL than debian:jessie has available (1.0.2g+)

# see http://tomcat.10.x6.nabble.com/VOTE-Release-Apache-Tomcat-8-0-32-tp5046007p5046024.html (and following discussion)


RUN set -x \

\

&& curl -fSL "$TOMCAT_TGZ_URL" -o tomcat.tar.gz \

&& curl -fSL "$TOMCAT_TGZ_URL.asc" -o tomcat.tar.gz.asc \

&& gpg --batch --verify tomcat.tar.gz.asc tomcat.tar.gz \

&& tar -xvf tomcat.tar.gz --strip-components=1 \

&& rm bin/*.bat \

&& rm tomcat.tar.gz* \

\

&& nativeBuildDir="$(mktemp -d)" \

&& tar -xvf bin/tomcat-native.tar.gz -C "$nativeBuildDir" --strip-components=1 \

&& nativeBuildDeps=" \

gcc \

libapr1-dev \

libssl-dev \

make \

openjdk-${JAVA_VERSION%%[-~bu]*}-jdk=$JAVA_DEBIAN_VERSION \

" \

&& apt-get update && apt-get install -y --no-install-recommends $nativeBuildDeps && rm -rf /var/lib/apt/lists/* \

&& ( \

export CATALINA_HOME="$PWD" \

&& cd "$nativeBuildDir/native" \

&& [ "$(openssl version | cut -d' ' -f2)" = '1.0.1k' ] \

# http://tomcat.10.x6.nabble.com/VOTE-Release-Apache-Tomcat-8-0-32-tp5046007p5048274.html (ie, HACK HACK HACK)

&& cp src/sslcontext.c src/sslcontext.c.orig \

&& awk ' \

/^    eckey = EC_KEY_new_by_curve_name/ { print "    EC_KEY *eckey = NULL;" } \

{ print } \

' src/sslcontext.c.orig > src/sslcontext.c \

&& ./configure \

--libdir=/usr/lib/jni \

--prefix="$CATALINA_HOME" \

--with-apr=/usr/bin/apr-1-config \

--with-java-home="$(docker-java-home)" \

--with-ssl=yes \

&& make -j$(nproc) \

&& make install \

) \

&& apt-get purge -y --auto-remove $nativeBuildDeps \

&& rm -rf "$nativeBuildDir" \

&& rm bin/tomcat-native.tar.gz


# verify Tomcat Native is working properly

RUN set -e \

&& nativeLines="$(catalina.sh configtest 2>&1)" \

&& nativeLines="$(echo "$nativeLines" | grep 'Apache Tomcat Native')" \

&& nativeLines="$(echo "$nativeLines" | sort -u)" \

&& if ! echo "$nativeLines" | grep 'INFO: Loaded APR based Apache Tomcat Native library' >&2; then \

echo >&2 "$nativeLines"; \

exit 1; \

fi


EXPOSE 8080

CMD ["catalina.sh", "run"]
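
The `awk` hack inside the Dockerfile inserts an `EC_KEY` declaration immediately before the line that assigns it. The transformation can be seen in isolation (the input below is a two-line stand-in for `src/sslcontext.c`):

```shell
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
cat > sslcontext.c.orig <<'EOF'
    something_else();
    eckey = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
EOF

# Print the declaration before any line matching the pattern, then the line itself.
awk '
/^    eckey = EC_KEY_new_by_curve_name/ { print "    EC_KEY *eckey = NULL;" }
{ print }
' sslcontext.c.orig > sslcontext.c

cat sslcontext.c
```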


$ docker build -t tomcat-jre8:8 .                   # tag the first build without a "."

$ docker tag tomcat-jre8:8 tomcat-jre8:8.5.0        # add a tag containing the "."

$ docker rmi tomcat-jre8:8                          # remove the first tag



# retag with the remote registry name

docker tag tomcat-jre8:8.5.0 privateregistry.com:5000/tomcat-jre8:8.5.0


# since the tag names the remote registry, the push goes there

$ docker push privateregistry.com:5000/tomcat-jre8:8.5.0



# verify from node01

$ docker pull privateregistry.com:5000/tomcat-jre8:8.5.0



$ https://192.168.230.211:5000/v2/tomcat-jre8/tags/list

$ curl https://privateregistry.com:5000/v2/tomcat-jre8/tags/list












haproxy 설치

Linux/Ubuntu 2016. 1. 9. 15:15

1. haproxy install

$ sudo apt-get install haproxy


$ sudo vi /etc/haproxy/haproxy.cfg

...

defaults

log        global

mode    http

retries   3                  # added

option   httplog

option   dontlognull

option   redispatch      # added: if one server dies, resend the request to another

...

...

listen serv 0.0.0.0:80        # added: "serv" can be any name

mode http

option http-server-close

timeout http-keep-alive 3000             # added: keep assets such as images on a single connection

server serv 127.0.0.1:9000 check       # name servers server1, server2, and so on


$ sudo service haproxy reload
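
With real backends there is usually more than one `server` line; the `check` keyword is what makes `option redispatch` useful, since a failed health check takes a server out of rotation. A hypothetical two-backend variant of the block above (names, IPs, and ports are examples):

```
listen serv 0.0.0.0:80
    mode http
    balance roundrobin                      # spread requests across backends
    option http-server-close
    timeout http-keep-alive 3000
    server server1 127.0.0.1:9000 check     # health-checked backend
    server server2 127.0.0.1:9001 check
```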



















1. Download

https://golang.org/doc/install?download=go1.5.2.darwin-amd64.tar.gz     # Mac

https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz           # Linux


$ sudo tar -C /usr/local -xzf go1.5.2.darwin-amd64.tar.gz

$ cd /usr/local

$ sudo chown -R root go


2. Environment variables

sudo vi /etc/profile

export GOROOT=/usr/local/go                                 # go install location

export PATH=$PATH:/usr/local/go/bin                      # go binaries


$ cd Documents

mkdir -p go_workspace{,/bin,/pkg,/src}
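
The `{,/bin,/pkg,/src}` brace expansion creates the workspace root (the empty first alternative) plus the three standard subdirectories in one command. A quick check of what it expands to (run through bash explicitly, since brace expansion is a bash feature):

```shell
RESULT=$(bash -c 'echo go_workspace{,/bin,/pkg,/src}')
echo "$RESULT"
# go_workspace go_workspace/bin go_workspace/pkg go_workspace/src
```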


vi .bash_profile 

export GOPATH=$HOME/Documents/go_workspace                     # go workspace location

export PATH=$HOME/Documents/go_workspace/bin:$PATH         # go workspace binaries



## download the go tools

$ go get golang.org/x/tools/cmd/...



3. Download go samples

go get github.com/GoesToEleven/GolangTraining


# download the kubernetes source

$ go get k8s.io/kubernetes       # equivalent to git clone https://github.com/kubernetes/kubernetes


4. Go workspace directory layout

- bin

- pkg

- src - github.com - GoesToEleven - GolangTraining



5. Download and set up the WebStorm editor

https://www.jetbrains.com/webstorm/download/

Version: WebStorm-11.0.3-custom-jdk-bundled.dmg



6. Install the golang plugin

https://plugins.jetbrains.com/plugin/5047?pr=idea

Version: Go-0.10.749.zip


# Project Open

/Users/ahnsk/Documents/go_workspace/src/github.com/GoesToEleven/GolangTraining


# Preferences settings

Go SDK : /usr/local/go

Go Libraries : go_worksapce/src



7. Download and configure a theme

http://color-themes.com/?view=index

Download Sublime Text 2.jar


In File >> Import Settings, select Sublime Text 2.jar


# Preferences settings

Editor -> Colors & Fonts : set Scheme to Sublime Text2



8. Install the Live Edit plugin for JavaScript debugging

https://plugins.jetbrains.com/plugin/7007?pr=pycharm

Download LiveEdit.jar


# Preferences settings

Build, Execution, Deployment -> Debugger -> Live Edit

체크 : Highlight current....

Update Auto in (ms):   16 


# Click the magnifier at the top right and open Edit Configurations

In the dialog, click + at the top left and add JavaScript Debug


# Install the JetBrains IDE Support extension from the Chrome Web Store

JetBrains IDE Support



# WebStorm shortcuts

Find file      : Command + Shift + O

Find in path   : Command + Shift + F

Run            : Ctrl + Alt + R

Debug          : Ctrl + Alt + D

Delete line    : Command + Backspace

Duplicate line : Command + D

Reformat code  : Command + Alt + L



# check go file conventions

$ gofmt -s -w file.go


Use git rebase -i (and, if needed, git push -f) to squash commits into logical units of work



# DCO (Developer Certificate of Origin) setup when contributing to Docker

# set on every commit

Docker-DCO-1.1-Signed-off-by: Seungkyu Ahn <seungkyua@gmail.com> (github: seungkyua)



# or set up a hook

$ cd docker

$ curl -o .git/hooks/prepare-commit-msg \

https://raw.githubusercontent.com/dotcloud/docker/master/contrib/prepare-commit-msg.hook

$ chmod +x .git/hooks/prepare-commit-msg



# set the github user

$ git config --global github.user seungkyua



# Channels

# To avoid deadlock, the side sending values into a channel must close it.

# On the receiving side, call defer sync.WaitGroup.Done().

# Alternatively, start a new goroutine that waits on sync.WaitGroup.Wait() to finish and then closes the channel.




# Viewing documentation

## check for common mistakes

$ go vet wordcount.go


## view usage of the tar package

$ go doc tar


## run a local documentation server

$ godoc -http=:6060



# install godep

go get github.com/tools/godep

$ cd ~/Documents/go_workspace/src/github.com/tools/godep

$ go install


## move into a project that uses godep

$ cd ~/Documents/go_workspace/src/k8s.io/kubernetes/


## godep get downloads packages into Godeps/_workspace.

## _workspace is scheduled to be deprecated.

$ godep get <package name>


 








Useful Sites

Programming 2015. 12. 22. 13:12

1. Kubernetes

    Google paper : https://research.google.com/pubs/pub43438.html

    Google talk : https://speakerdeck.com/jbeda/containers-at-scale











Slide authoring

http://prezi.com


















