303 posts under "All categories"
- 2017.07.18 Kubernetes and OpenStack-Helm
- 2017.03.15 OpenStack Contribution setup
- 2016.11.24 Sending a device to a namespace with ip link and restoring it
- 2016.11.04 Connecting Kubernetes with Ceph rbd
- 2016.10.18 T-Fabric introduction video
- 2016.10.10 Kubernetes Authentication and Authorization setup
- 2016.10.06 Understanding Kubernetes CPU allocation 1
- 2016.10.03 Installing kube-dns on Kubernetes, kubectl usage, connecting glusterFS
- 2016.09.28 Installing Zookeeper and Kafka
- 2016.09.13 Using an OpenStack prompt
## OpenStack Foundation user registration
Once a device has been sent into a namespace with ip link, it is no longer visible from the host. To restore it, move it back to the root namespace (the netns of PID 1) from inside the namespace:
$ ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ip link set eno2 netns 1
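The forward direction works the same way: from the host, hand the device to the namespace, after which it only shows up inside. A sketch (requires root; device and namespace names as above):

```
$ sudo ip link set eno2 netns qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc
$ ip link show eno2            ## fails: the device is gone from the host
$ sudo ip netns exec qrouter-68cfc511-7e75-4b85-a1ca-d8a09c489ccc ip link show eno2
```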
Using Kubernetes Authentication and Authorization
[ Authentication ]
- access with a client-ca-file
- use a static password file
- use a static token file
- use OpenStack Keystone
The static password file also works fine, but I have not tried wiring it to Authorization.
The OpenStack Keystone integration is still alpha and changes frequently, so it is not yet worth reading the source; skipping it.
The static token method also integrates well with Authorization, so that is what we use here.
## Generate a uuid
$ cat /proc/sys/kernel/random/uuid
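Each read of this kernel file returns a fresh version-4 UUID. A quick format sanity check before pasting the value into the token file (a Linux-only sketch):

```shell
# read one uuid and verify the 8-4-4-4-12 lowercase-hex layout
uuid=$(cat /proc/sys/kernel/random/uuid)
echo "$uuid" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo "format ok"
# → format ok
```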
## replace {{uuid}} below with the value generated above
$ sudo vi /etc/default/kube-token
{{uuid}},admin,1
{{uuid}},ahnsk,2,"tfabric,group1"
{{uuid}},stack,3,tfabric
## add the token-file option to the API server
$ sudo chown stack:root /etc/default/kube-token
$ sudo vi /etc/default/kube-apiserver
--token-auth-file=/etc/default/kube-token \
$ sudo systemctl restart kube-apiserver.service
$ kubectl -s https://kube-master01:6443 --token={{uuid}} get node
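kubectl is simply sending that token as an HTTP Bearer header, so the same call can be checked with curl (a sketch; -k skips server certificate verification):

```
$ curl -k -H "Authorization: Bearer {{uuid}}" https://kube-master01:6443/api/v1/nodes
```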
[ Authorization ]
- ABAC Mode
- RBAC Mode
For kube-system to reach the kube-apiserver, line 1 of the policy file must be present.
$ sudo vi /etc/default/kube-apiserver
--authorization-mode=ABAC \
--authorization-policy-file=/etc/default/kube-rbac.json \
$ sudo systemctl restart kube-apiserver.service
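The policy file contents are not shown above; with ABAC mode the file is one JSON policy object per line. A minimal sketch of what /etc/default/kube-rbac.json might contain (user and group names taken from the token file; line 1 is the kube-system entry noted above):

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kube-system", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "admin", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "tfabric", "namespace": "tfabric", "resource": "*", "apiGroup": "*"}}
```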
$ cd ~/kube
$ vi busybox-tfabric.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: tfabric
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
$ kubectl -s https://kube-master01:6443 --token={{uuid}} --v=8 version
Specifying the token on every call is tedious, so it is better to use a config context.
That will be covered next time....
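For reference, the context setup would look roughly like this. A sketch using standard kubectl config subcommands; the names tfabric-cluster and tfabric-ctx are assumptions, not from the original post:

```
$ kubectl config set-cluster tfabric-cluster --server=https://kube-master01:6443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials stack --token={{uuid}}
$ kubectl config set-context tfabric-ctx --cluster=tfabric-cluster --user=stack --namespace=tfabric
$ kubectl config use-context tfabric-ctx
$ kubectl get node             ## the token now comes from the kubeconfig
```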
## https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt
$ ssh kube-node01 "mount | grep gv1"
10.0.0.171 kafka01 zookeeper01
10.0.0.172 kafka02 zookeeper02
10.0.0.173 kafka03 zookeeper03
192.168.30.171 kafka01 zookeeper01
192.168.30.172 kafka02 zookeeper02
192.168.30.173 kafka03 zookeeper03
## Create the kafka VMs
$ openstack flavor create --id ka1 --ram 8192 --disk 160 --vcpus 2 kafka
$ openstack server create --image 7498cf9d-bd2e-4401-9ae9-ca72120272ed \
--flavor ka1 --nic net-id=03a6de58-9693-4c41-9577-9307c8750141,v4-fixed-ip=10.0.0.171 \
--key-name magnum-key --security-group default kafka01
$ openstack ip floating create --floating-ip-address 192.168.30.171 public
$ openstack ip floating add 192.168.30.171 kafka01
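kafka02 and kafka03 follow the same create/floating-ip pattern with .172/.173. Sketched as a loop over the three brokers (image and net ids as above):

```
for i in 1 2 3; do
  openstack server create --image 7498cf9d-bd2e-4401-9ae9-ca72120272ed \
    --flavor ka1 --nic net-id=03a6de58-9693-4c41-9577-9307c8750141,v4-fixed-ip=10.0.0.17$i \
    --key-name magnum-key --security-group default kafka0$i
  openstack ip floating create --floating-ip-address 192.168.30.17$i public
  openstack ip floating add 192.168.30.17$i kafka0$i
done
```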
## Install Oracle Java 8
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
## Managing multiple installed Java versions
$ sudo update-alternatives --config java
## Install zookeeper
## https://zookeeper.apache.org/doc/r3.4.9/zookeeperStarted.html
## http://apache.mirror.cdnetworks.com/zookeeper/zookeeper-3.4.9/
$ mkdir -p downloads && cd downloads
$ wget http://apache.mirror.cdnetworks.com/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
$ sudo tar -C /usr/local -xzvf zookeeper-3.4.9.tar.gz
$ cd /usr/local
$ sudo ln -s zookeeper-3.4.9/ zookeeper
$ vi /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper01:2888:3888
server.2=zookeeper02:2888:3888
server.3=zookeeper03:2888:3888
$ vi /usr/local/zookeeper/bin/zkEnv.sh
56 ZOO_LOG_DIR="/var/log/zookeeper"
$ sudo mkdir -p /var/log/zookeeper && sudo chown -R stack:stack /var/log/zookeeper
## the zookeeper myid is set per server
$ sudo mkdir -p /var/lib/zookeeper && sudo chown -R stack:stack /var/lib/zookeeper
$ vi /var/lib/zookeeper/myid
1
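The myid must match that host's server.N index in zoo.cfg, so the other two hosts get 2 and 3. For example (run on each host; the data dir is already owned by stack):

```
$ echo 2 > /var/lib/zookeeper/myid    ## on zookeeper02
$ echo 3 > /var/lib/zookeeper/myid    ## on zookeeper03
```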
$ vi ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$PATH:$ZOOKEEPER_HOME/bin
$ . ~/.bashrc
$ zkServer.sh start
## Verify the zookeeper installation
$ zkCli.sh -server zookeeper01:2181
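Beyond the CLI shell, each server can be probed with ZooKeeper's four-letter ruok command; a healthy server answers imok (assumes nc is available):

```
$ echo ruok | nc zookeeper01 2181
imok
$ zkServer.sh status
```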
## Install Kafka
## https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04
## https://kafka.apache.org/downloads.html
## https://kafka.apache.org/documentation.html
$ cd downloads
$ wget http://apache.mirror.cdnetworks.com/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz
$ sudo tar -C /usr/local -xzvf kafka_2.11-0.10.0.1.tgz
$ cd /usr/local && sudo chown -R stack:stack kafka_2.11-0.10.0.1
$ sudo ln -s kafka_2.11-0.10.0.1/ kafka
## the broker id must be unique on each server
$ vi /usr/local/kafka/config/server.properties
20 broker.id=0
56 log.dirs=/var/lib/kafka
112 zookeeper.connect=zookeeper01:2181,zookeeper02:2181,zookeeper03:2181
117 delete.topic.enable = true
$ sudo mkdir -p /var/lib/kafka && sudo chown -R stack:stack /var/lib/kafka
$ sudo mkdir -p /var/log/kafka && sudo chown -R stack:stack /var/log/kafka
$ vi ~/.bashrc
export KAFKA_HOME=/usr/local/kafka
PATH=$PATH:$KAFKA_HOME/bin
$ . ~/.bashrc
$ nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties > /var/log/kafka/kafka.log 2>&1 &
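Instead of a nohup'd background process, the broker could be run under systemd. A sketch of a unit file (paths and the stack user as above; the unit name kafka.service is an assumption):

```
## /etc/systemd/system/kafka.service (sketch)
[Unit]
Description=Apache Kafka broker
After=network.target

[Service]
User=stack
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```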
## kafkat: managing the kafka cluster
$ sudo apt-get -y install ruby ruby-dev build-essential
$ sudo gem install kafkat --source https://rubygems.org --no-ri --no-rdoc
$ vi ~/.kafkatcfg
{
"kafka_path": "/usr/local/kafka",
"log_path": "/var/lib/kafka",
"zk_path": "zookeeper01:2181,zookeeper02:2181,zookeeper03:2181"
}
## View kafka partitions
$ kafkat partitions
## Test kafka data flow
$ echo "Hello, World" | kafka-console-producer.sh --broker-list kafka01:9092,kafka02:9092,kafka03:9092 --topic TutorialTopic > /dev/null
$ kafka-console-consumer.sh --zookeeper zookeeper01:2181,zookeeper02:2181,zookeeper03:2181 --topic TutorialTopic --from-beginning
$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ kafka-topics.sh --list --zookeeper localhost:2181
$ kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
$ kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
## Replication factor 3 test
$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
$ kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
my test message 1
my test message 2
^C
$ kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
## Take one server down
$ kafka-server-stop.sh
$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 0 Replicas: 0,2,1 Isr: 0,1
$ kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
## Delete the topics
$ kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-replicated-topic
$ kafka-topics.sh --delete --zookeeper localhost:2181 --topic TutorialTopic
## Build a prompt that shows which project and user you are using the OpenStack CLI as
## Prompt setup for OpenStack users, displayed as (project:user)
$ vi ~/.bashrc
openstack_user() {
    # prints "(PROJECT:USER)" from OS_PROJECT_NAME / OS_USERNAME; assumes env
    # lists OS_PROJECT_NAME before OS_USERNAME (the order openrc exports them)
    env | grep -E 'OS_USERNAME|OS_PROJECT_NAME' 2> /dev/null | sed -e 's/OS_PROJECT_NAME=\(.*\)/(\1/' -e 's/OS_USERNAME=\(.*\)/\1)/' | paste -sd ":"
}
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(openstack_user)\$ '
$ . demo/demo-openrc
(demo:demo)$ openstack server list
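The sed/paste pipeline can be checked in isolation by feeding it the two env lines in a fixed order (project first, matching how openrc exports them):

```shell
# sed turns the two lines into "(demo" and "demo)"; paste joins them with ":"
printf 'OS_PROJECT_NAME=demo\nOS_USERNAME=demo\n' \
  | sed -e 's/OS_PROJECT_NAME=\(.*\)/(\1/' -e 's/OS_USERNAME=\(.*\)/\1)/' \
  | paste -sd ":"
# → (demo:demo)
```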