Using a Ceph RBD StorageClass in Kubernetes (2017.10.20)


[ Using dynamic volumes ]
1. Specify a storageClassName in the PVC and create only the PVC; the PV is then provisioned dynamically and bound to it.
2. The backing RBD image is created automatically as well. This assumes the StorageClass already exists; a reference sketch follows.
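## Reference sketch (not in the original post): a minimal StorageClass for the
## in-tree kubernetes.io/rbd provisioner, reusing the monitors, pool, and user
## from the static example below. The admin Secret name and namespace are
## assumptions for this illustration.
$ vi ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.30.23:6789,192.168.30.24:6789,192.168.30.25:6789
  adminId: admin
  adminSecretName: ceph-secret-admin     # assumed: admin key stored as a Secret
  adminSecretNamespace: kube-system      # assumed namespace of the admin Secret
  pool: kubes
  userId: kube
  userSecretName: ceph-secret-user       # must exist in the PVC's namespace (ci-infra)
  imageFormat: "2"
  imageFeatures: layering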

$ vi jenkins-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: ci-infra
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ceph




[ Using static volumes ]
1. The RBD image must be created manually.
2. The PV must specify both the storageClassName and the rbd settings.
    Add labels to the PV so that the PVC can find it with a selector.
    (The keyring value may be omitted; the Secret from the StorageClass is used instead.)
3. In the PVC, bind to the PV with the storageClassName plus either a selector or a volumeName.
    (The storageClassName may be omitted; if it is, the default StorageClass is used.)


$ vi jenkins-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    app: jenkins
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ceph
  rbd:
    image: jenkins
    monitors:
    - 192.168.30.23:6789
    - 192.168.30.24:6789
    - 192.168.30.25:6789
    pool: kubes
    secretRef:
      name: ceph-secret-user
    user: kube


$ vi jenkins-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: ci-infra
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ceph
  selector:
    matchLabels:
      app: jenkins
# volumeName: jenkins


$ vi jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  namespace: ci-infra
  labels:
    app: jenkins
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cicd-services
                operator: In
                values:
                - enabled
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: master
        env:
        - name: JENKINS_OPTS
          value: "--httpsPort=0 --http2Port=0"
        - name: JAVA_OPTS
          value: "-Xms8G -Xmx8G -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1 -Dorg.apache.commons.jelly.tags.fmt.timeZone=Asia/Seoul"
        image: jenkins/jenkins:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 50000
          name: jnlp
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 2
          failureThreshold: 5
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins
#        resources:
#          limits:
#            cpu: 4000m
#            memory: 8000Mi
#          requests:
#            cpu: 1000m
#            memory: 8000Mi
      volumes:
      - name: jenkins
        persistentVolumeClaim:
          claimName: jenkins




## Create the image, PV, and PVC
$ rbd create kubes/jenkins -s 100G
$ kubectl create -f jenkins-pv.yaml
$ kubectl create -f jenkins-pvc.yaml

$ kubectl create -f jenkins-deployment.yaml 
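## Quick check (not in the original post): confirm the image, PV, and PVC binding
$ rbd info kubes/jenkins
$ kubectl get pv jenkins
$ kubectl -n ci-infra get pvc jenkins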


Connecting Kubernetes with Ceph RBD (2016.11.04)

## https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/cephfs

[ On the ceph-admin node ]
$ ssh ceph@192.168.30.22

## Create the kubes pool
$ ceph osd pool create kubes 128

## Create the kube user
$ ceph auth get-or-create client.kube mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=kubes'


[client.kube]
    key = AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==

## Retrieve the secret key for the kube user
$ ceph auth get-key client.kube
AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==


## Add the kube key and ceph.conf to the kube-node01 and kube-node02 servers
$ ssh stack@192.168.30.15 sudo mkdir -p /etc/ceph
$ ceph auth get-or-create client.kube | ssh stack@192.168.30.15 sudo tee /etc/ceph/ceph.client.kube.keyring
$ cat /etc/ceph/ceph.conf | ssh stack@192.168.30.15 sudo tee /etc/ceph/ceph.conf
$ ssh stack@192.168.30.15 sudo chown -R stack:stack /etc/ceph

$ ssh stack@192.168.30.16 sudo mkdir -p /etc/ceph
$ ceph auth get-or-create client.kube | ssh stack@192.168.30.16 sudo tee /etc/ceph/ceph.client.kube.keyring
$ cat /etc/ceph/ceph.conf | ssh stack@192.168.30.16 sudo tee /etc/ceph/ceph.conf
$ ssh stack@192.168.30.16 sudo chown -R stack:stack /etc/ceph



[ On kube-node01 and kube-node02 ]
## Install the Ceph RBD client (ceph-common) and the CephFS client (ceph-fs-common)
$ sudo apt-get -y install ceph-common ceph-fs-common



########################################
## Connecting via Ceph RBD
########################################

## https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/rbd
## https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes

[ On the ceph-admin node ]

## Add the kube keyring file
$ sudo vi /etc/ceph/ceph.client.kube.keyring
[client.kube]
    key = AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==


## Create an RBD image
## http://karan-mj.blogspot.kr/2013/12/ceph-installation-part-3.html

$ rbd create ceph-rbd-test --pool kubes --name client.kube --size 1G -k /etc/ceph/ceph.client.kube.keyring

$ rbd list --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd -p kubes ls


## The new image features introduced in Jewel cause mount problems on most current OS kernels, so they must be disabled
$ rbd feature disable ceph-rbd-test fast-diff --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test deep-flatten --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test object-map --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd feature disable ceph-rbd-test exclusive-lock --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
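## Alternative (not in the original post): create the image with only the
## layering feature from the start, so nothing has to be disabled afterwards
$ rbd create ceph-rbd-test --pool kubes --name client.kube --size 1G \
  --image-feature layering -k /etc/ceph/ceph.client.kube.keyring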

$ rbd info ceph-rbd-test --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring
$ rbd --image ceph-rbd-test -p kubes info

## (Reference) delete the image when it is no longer needed
$ rbd remove ceph-rbd-test --pool kubes --name client.kube -k /etc/ceph/ceph.client.kube.keyring


## Base64-encode the key so it can be used in the Secret YAML
$ grep key /etc/ceph/ceph.client.kube.keyring |awk '{printf "%s", $NF}'|base64
QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==
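## Sanity check: decoding the string should print the original key
$ echo 'QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==' | base64 -d
AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA==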




[ On the kube-deploy node ]

## Create the key as a Kubernetes Secret so pods can reference it
$ vi ~/kube/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDdC9CcFlpZ0o3TVJBQTV2eStjbDM5RXNLcFkzQyt0WEVHckE9PQ==

$ scp ~/kube/ceph-secret.yaml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/ceph-secret.yaml"
$ kubectl -s http://kube-master01:8080 get secrets
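## Alternative (not in the original post): have kubectl base64-encode the raw
## key itself instead of writing the Secret YAML by hand
$ ssh kube-master01 "kubectl create secret generic ceph-secret --from-literal=key='AQCt/BpYigJ7MRAA5vy+cl39EsKpY3C+tXEGrA=='"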


## Create the rbd-with-secret pod to mount the RBD image
$ vi ~/kube/rbd-with-secret.yml
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: rbd-rw-busybox
    volumeMounts:
    - mountPath: "/mnt/rbd"
      name: rbdpd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - 192.168.30.23:6789
      - 192.168.30.24:6789
      - 192.168.30.25:6789
      pool: kubes
      image: ceph-rbd-test
      user: kube
      secretRef:
        name: ceph-secret
      fsType: ext4
      readOnly: false


$ scp ~/kube/rbd-with-secret.yml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/rbd-with-secret.yml"
$ kubectl -s http://kube-master01:8080 get pods




## Verify the RBD mount
$ kubectl -s http://kube-master01:8080 describe pods rbd2
$ kubectl -s http://kube-master01:8080 exec -it rbd2 -- df -h
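## Extra check (not in the original post): a write/read round-trip proves the
## volume is mounted read-write
$ kubectl -s http://kube-master01:8080 exec -it rbd2 -- sh -c 'echo hello > /mnt/rbd/hello && cat /mnt/rbd/hello'
hello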



[ On kube-node02 ]

$ docker ps
$ docker inspect --format '{{ .Mounts }}' 4c4070a1393b

## Or check the mounts directly
$ mount |grep kub
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kubes-image-ceph-rbd-test type ext4 (rw,relatime,stripe=1024,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/061973fc-a265-11e6-940f-5cb9018c67dc/volumes/kubernetes.io~rbd/rbdpd type ext4 (rw,relatime,stripe=1024,data=ordered)
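## The kernel-side mapping can also be listed directly (output approximate)
$ sudo rbd showmapped
id pool  image         snap device
0  kubes ceph-rbd-test -    /dev/rbd0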




[ On the kube-deploy node ]

## Create an rbd pod that uses only the keyring, without the Secret
$ vi ~/kube/rbd.yml
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
  - image: gcr.io/google_containers/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: rbd-rw-busybox
    volumeMounts:
    - mountPath: "/mnt/rbd"
      name: rbdpd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - 192.168.30.23:6789
      - 192.168.30.24:6789
      - 192.168.30.25:6789
      pool: kubes
      image: ceph-rbd-test
      user: kube
      keyring: /etc/ceph/ceph.client.kube.keyring
      fsType: ext4
      readOnly: false


$ scp ~/kube/rbd.yml kube-master01:~/kube/.
$ ssh kube-master01 "kubectl create -f ~/kube/rbd.yml"
$ kubectl -s http://kube-master01:8080 get pods

## Verify the RBD mount

$ kubectl -s http://kube-master01:8080 exec -it rbd -- df -h 

