
Since I had already drafted a Kubernetes training curriculum, I sketched out an OpenStack course as well.

 

[ OpenStack Training Curriculum ]

[ Day 1 ]
Introduction to OpenStack (2H)
 - Background, components, architecture, and features of OpenStack
OpenStack installation lab (4H)
 - Install a basic OpenStack setup (without cinder, using a tenant network)
OpenStack CLI lab (2H)
 - Hands-on practice with the OpenStack CLI

[ Day 2 ]
OpenStack Identity management (1H)
 - Keystone configuration and features, with hands-on commands
OpenStack Image management (4H)
 - Glance configuration and features
 - Building images with diskimage-builder and using cloud-init (see the sketch after this day's outline)
 - Hands-on commands
Ceph storage management (4H)
 - Ceph storage basics and installation lab
 - Cinder installation, configuration, and features
 - Hands-on commands
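
For the diskimage-builder part of Day 2, a minimal sketch (the element names, release, and output file are illustrative assumptions, not part of the curriculum):

$ pip install diskimage-builder
# "ubuntu" and "vm" are stock diskimage-builder elements; the ubuntu cloud
# image already ships cloud-init, and -o names the output (ubuntu-image.qcow2)
$ DIB_RELEASE=xenial disk-image-create ubuntu vm -o ubuntu-image
# register the image with Glance
$ openstack image create --disk-format qcow2 --container-format bare --file ubuntu-image.qcow2 ubuntu-image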

[ Day 3 ]
OpenStack network management (8H)
 - Neutron features
 - Provider network configuration and features
 - Tenant network configuration and features
 - OVN (Open Virtual Network) overview

[ Day 4 ]
OpenStack Compute management (4H)
 - Nova Compute configuration and features
 - Hands-on: live migration, evacuation, enabling and disabling nodes
OpenStack comprehensive lab (4H)
 - Troubleshooting through logs
 - End-to-end practice: creating and registering a custom image, launching VM instances, and so on

 

 


Helm chart

OpenStack 2018. 3. 23. 10:33
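
These look like per-chart value overrides for an openstack-helm style deployment. As a usage sketch (the chart paths and release names below are illustrative assumptions, not taken from the post), each file would be passed to Helm 2 with --values:

$ helm install ./mariadb --name mariadb --namespace openstack --values mariadb.yaml
$ helm upgrade mariadb ./mariadb --values mariadb.yaml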



# ingress.yaml

images:
  tags:
    entrypoint: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    ingress: registry.cicd.stg.taco/nginx-ingress-controller:0.9.0
    error_pages: registry.cicd.stg.taco/defaultbackend:1.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
  pull_policy: Always
config:
  worker-processes: "8"
pod:
  replicas:
    ingress: 1
    error_page: 1




# openstack-ceph-config.yaml

images:
  tags:
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ceph_bootstrap: registry.cicd.stg.taco/ceph-daemon:tag-build-master-jewel-ubuntu-16.04
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    ceph_daemon: registry.cicd.stg.taco/ceph-daemon:tag-build-master-jewel-ubuntu-16.04
    ceph_config_helper: registry.cicd.stg.taco/ceph-config-helper:v1.7.5
    ceph_rbd_provisioner: registry.cicd.stg.taco/rbd-provisioner:v0.1.1
    ceph_cephfs_provisioner: registry.cicd.stg.taco/cephfs-provisioner:v0.1.1
  pull_policy: IfNotPresent
deployment:
  storage_secrets: true
  client_secrets: true
  rbd_provisioner: false
  cephfs_provisioner: false
  rgw_keystone_user_and_endpoints: false
conf:
  ceph:
    global:
      mon_host: 192.168.51.20
storageclass:
  rbd:
    provision_storage_class: false
    user_id: cinder
    admin_secret_namespace: openstack
  cephfs:
    provision_storage_class: false
    admin_secret_namespace: openstack
manifests:
  configmap_bin_clients: true
  configmap_bin_ks: true
  configmap_bin: true
  configmap_etc: true
  configmap_templates: true
  daemonset_mon: false
  daemonset_osd: false
  deployment_mds: false
  deployment_moncheck: false
  deployment_rbd_provisioner: false
  deployment_cephfs_provisioner: false
  deployment_rgw: false
  deployment_mgr: false
  job_bootstrap: false
  job_cephfs_client_key: false
  job_keyring: false
  job_ks_endpoints: false
  job_ks_service: false
  job_ks_user: false
  job_namespace_client_key_cleaner: true
  job_namespace_client_key: true
  job_rbd_pool: false
  job_storage_admin_keys: true
  secret_keystone_rgw: false
  secret_keystone: false
  service_mgr: false
  service_mon: false
  service_rgw: false
  service_mon_discovery: false
  storageclass: false
dependencies:
  rbd_provisioner:
    jobs:
    services:




# mariadb.yaml

images:
  tags:
    mariadb: registry.cicd.stg.taco/mariadb:10.1.23
    test: registry.cicd.stg.taco/ocata/ubuntu-source-kolla-toolbox:develop
  pull_policy: Always
pod:
  replicas:
    server: 3
volume:
  enabled: true
  class_name: ceph



# etcd.yaml

images:
  tags:
    etcd: registry.cicd.stg.taco/etcd:v3.2.5
    test: registry.cicd.stg.taco/ocata/ubuntu-source-kolla-toolbox:develop
  pull_policy: IfNotPresent
pod:
  replicas:
    etcd: 1



# rabbitmq.yaml

images:
  tags:
    rabbitmq: registry.cicd.stg.taco/rabbitmq:3.7
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    test: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
  pull_policy: IfNotPresent
pod:
  replicas:
    server: 3
volume:
  class_name: ceph




# memcached.yaml

images:
  tags:
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    memcached: registry.cicd.stg.taco/memcached:1.5.5
  pull_policy: IfNotPresent
pod:
  replicas:
    server: 1





# libvirt.yaml

images:
  tags:
    libvirt: registry.cicd.stg.taco/ocata/ubuntu-source-nova-libvirt:2.2.0
  pull_policy: Always
ceph:
  enabled: true
  cinder_user: "cinder"
  cinder_keyring: "xxxxx=="
libvirt:
  listen_addr: 0.0.0.0
  log_level: 3
manifests:
  configmap_bin: true
  configmap_etc: true
  daemonset_libvirt: true




# openvswitch.yaml

images:
  tags:
    openvswitch_db_server: registry.cicd.stg.taco/ocata/ubuntu-source-openvswitch-db-server:2.2.0
    openvswitch_vswitchd: registry.cicd.stg.taco/ocata/ubuntu-source-openvswitch-vswitchd:2.2.0
  pull_policy: Always
network:
  external_bridge: br-ex
  interface:
    external: bond1.52
  auto_bridge_add: {}




# keystone.yaml

images:
  tags:
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    test: registry.cicd.stg.taco/ocata/ubuntu-source-rally:2.2.0
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    keystone_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    keystone_fernet_setup: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    keystone_fernet_rotate: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    keystone_credential_setup: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    keystone_credential_rotate: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    keystone_api: registry.cicd.stg.taco/ocata/ubuntu-source-keystone:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
  pull_policy: Always
conf:
  keystone:
    DEFAULT:
      debug: true
pod:
  replicas:
    api: 3




# glance.yaml

storage: rbd
images:
  tags:
    test: registry.cicd.stg.taco/ocata/ubuntu-source-rally:2.2.0
    glance_storage_init: registry.cicd.stg.taco/ceph-daemon:tag-build-master-jewel-ubuntu-16.04
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    glance_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-glance-api:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    glance_api: registry.cicd.stg.taco/ocata/ubuntu-source-glance-api:2.2.0
    glance_registry: registry.cicd.stg.taco/ocata/ubuntu-source-glance-registry:2.2.0
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
  pull_policy: Always
pod:
  replicas:
    api: 3
    registry: 3
  user:
    glance:
      uid: 42415
network:
  api:
    ingress:
      proxy_body_size: 102400M
conf:
  ceph:
    monitors: ["192.168.51.20"]
    admin_keyring: "xxxx=="
  glance:
    glance_store:
      rbd_store_user: glance
      rbd_store_pool: images
    DEFAULT:
      show_image_direct_url: true
bootstrap:
  enabled: true
  images:
    cirros:
      id: 201084fc-c276-4744-8504-cb974dbb3610
      private: false




# nova.yaml

images:
  tags:
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-kolla-toolbox:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-kolla-toolbox:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-kolla-toolbox:2.2.0
    nova_api: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    nova_cell_setup: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    nova_compute: registry.cicd.stg.taco/ocata/ubuntu-source-nova-compute:2.2.0
    nova_compute_ironic: registry.cicd.stg.taco/ocata/ubuntu-source-nova-compute-ironic:2.2.0
    nova_compute_ssh: registry.cicd.stg.taco/ocata/ubuntu-source-nova-ssh:2.2.0
    nova_conductor: registry.cicd.stg.taco/ocata/ubuntu-source-nova-conductor:2.2.0
    nova_consoleauth: registry.cicd.stg.taco/ocata/ubuntu-source-nova-consoleauth:2.2.0
    nova_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-nova-api:2.2.0
    nova_novncproxy: registry.cicd.stg.taco/ocata/ubuntu-source-nova-novncproxy:2.2.0
    nova_novncproxy_assets: registry.cicd.stg.taco/ocata/ubuntu-source-nova-novncproxy:2.2.0
    nova_placement: registry.cicd.stg.taco/ocata/ubuntu-source-nova-placement-api:2.2.0
    nova_scheduler: registry.cicd.stg.taco/ocata/ubuntu-source-nova-scheduler:2.2.0
    nova_spiceproxy: registry.cicd.stg.taco/ocata/ubuntu-source-nova-spicehtml5proxy:2.2.0
    nova_spiceproxy_assets: registry.cicd.stg.taco/ocata/ubuntu-source-nova-spicehtml5proxy:2.2.0
    test: registry.cicd.stg.taco/ocata/ubuntu-source-rally:2.2.0
  pull_policy: Always
bootstrap:
  enabled: true
  flavors:
    m1_tiny:
      id: 0c84e220-a258-439f-a6ff-f8e9fd980025
network:
  novncproxy:
    name: "nova-novncproxy"
    node_port:
      enabled: true
      port: 30608
    port: 6080
    targetPort: 6080
ceph:
  enabled: true
  cinder_user: "cinder"
  cinder_keyring: "xxxx=="
  secret_uuid: "582393ff-9a5c-4a2e-ae0d-86ec18c36afc"
conf:
  nova:
    DEFAULT:
      force_config_drive: true
      scheduler_default_filters: "RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter"
      debug: true
    vnc:
      novncproxy_base_url: http://ctrl01-stg:30608/vnc_auto.html
    libvirt:
      rbd_user: "cinder"
      rbd_secret_uuid: "582393ff-9a5c-4a2e-ae0d-86ec18c36afc"
    scheduler:
      discover_hosts_in_cells_interval: 60
endpoints:
  oslo_db_cell0:
    path: /nova_cell0
pod:
  user:
    nova:
      uid: 42436
  replicas:
    api_metadata: 3
    osapi: 3
    conductor: 3
    consoleauth: 3
    scheduler: 3
    novncproxy: 3




# neutron.yaml

images:
  tags:
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    neutron_test: registry.cicd.stg.taco/ocata/ubuntu-source-rally:2.2.0
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    neutron_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-server:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    neutron_server: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-server:2.2.0
    neutron_dhcp: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-dhcp-agent:2.2.0
    neutron_metadata: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-metadata-agent:2.2.0
    neutron_l3: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-l3-agent:2.2.0
    neutron_openvswitch_agent: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-openvswitch-agent:2.2.0
    neutron_linuxbridge_agent: registry.cicd.stg.taco/ocata/ubuntu-source-neutron-linuxbridge-agent:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
  pull_policy: Always
pod:
  replicas:
    server: 3
  user:
    neutron:
      uid: 42435
labels:
  agent:
    dhcp:
      node_selector_key: openstack-network-node
    l3:
      node_selector_key: openstack-network-node
manifests:
  daemonset_metadata_agent: false
  daemonset_ovs_agent: true
  daemonset_lb_agent: false
network:
  backend: ovs
  external_bridge: br-ex
  interface:
    tunnel: bond1
conf:
  neutron_sudoers:
    override: |
      # This sudoers file supports rootwrap-daemon for both Kolla and LOCI Images.
      Defaults !requiretty
      Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/openstack/bin:/var/lib/kolla/venv/bin"
      neutron ALL = (root) NOPASSWD: /var/lib/kolla/venv/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *, /var/lib/openstack/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *, /var/lib/kolla/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf, /var/lib/openstack/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
  neutron:
    DEFAULT:
      debug: True
      core_plugin: ml2
      l3_ha: True
      global_physnet_mtu: 9000
      service_plugins: router
      interface_driver: openvswitch
    agent:
      root_helper_daemon: sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
  plugins:
    ml2_conf:
      ml2:
        mechanism_drivers: openvswitch,l2population
        type_drivers: flat, vlan, vxlan
        tenant_network_types: vxlan
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
        l2_population: True
        arp_responder: True
      ovs:
        bridge_mappings: "external:br-ex"
      securitygroup:
        firewall_driver: openvswitch






# cinder.yaml

images:
  tags:
    test: registry.cicd.stg.taco/ocata/ubuntu-source-rally:2.2.0
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    cinder_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-api:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    cinder_api: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-api:2.2.0
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    cinder_scheduler: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-scheduler:2.2.0
    cinder_volume: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-volume:2.2.0
    cinder_volume_usage_audit: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-volume:2.2.0
    cinder_storage_init: registry.cicd.stg.taco/ceph-daemon:tag-build-master-jewel-ubuntu-16.04
    cinder_backup: registry.cicd.stg.taco/ocata/ubuntu-source-cinder-backup:2.2.0
    cinder_backup_storage_init: registry.cicd.stg.taco/ceph-daemon:tag-build-master-jewel-ubuntu-16.04
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
  pull_policy: Always
pod:
  user:
    cinder:
      uid: 42407
  replicas:
    api: 3
    backup: 1
    scheduler: 3
    volume: 1
conf:
  ceph:
    admin_keyring: "xxxxx=="
    monitors: ["192.168.51.20"]
  cinder:
    DEFAULT:
      debug: true
      backup_ceph_user: "cinder"
      backup_ceph_pool: "backups"
  backends:
    rbd1:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      volume_backend_name: rbd1
      rbd_ceph_conf: "/etc/ceph/ceph.conf"
      rbd_flatten_volume_from_snapshot: false
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: -1
      rbd_user: "cinder"
      rbd_pool: "volumes"





# heat.yaml

images:
  tags:
    bootstrap: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    heat_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-heat-api:2.2.0
    db_drop: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_user: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_service: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    ks_endpoints: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    heat_api: registry.cicd.stg.taco/ocata/ubuntu-source-heat-api:2.2.0
    heat_cfn: registry.cicd.stg.taco/ocata/ubuntu-source-heat-api:2.2.0
    heat_cloudwatch: registry.cicd.stg.taco/ocata/ubuntu-source-heat-api:2.2.0
    heat_engine: registry.cicd.stg.taco/ocata/ubuntu-source-heat-engine:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    rabbit_init: registry.cicd.stg.taco/rabbitmq:3.7-management
  pull_policy: Always
pod:
  user:
    heat:
      uid: 42418
  replicas:
    api: 3
    cfn: 3
    cloudwatch: 3
    engine: 3



# horizon.yaml

images:
  tags:
    db_init: registry.cicd.stg.taco/ocata/ubuntu-source-horizon:2.2.0
    horizon_db_sync: registry.cicd.stg.taco/ocata/ubuntu-source-horizon:2.2.0
    horizon: registry.cicd.stg.taco/ocata/ubuntu-source-horizon:2.2.0
    dep_check: registry.cicd.stg.taco/kubernetes-entrypoint:v0.2.1
    test: registry.cicd.stg.taco/ocata/ubuntu-source-horizon:develop
  pull_policy: Always
pod:
  replicas:
    server: 3
network:
  external_policy_local: false
  node_port:
    enabled: true
    port: 32000
local_settings:
  openstack_neutron_network:
    enable_router: "True"
    enable_quotas: "True"
    enable_ipv6: "False"
    enable_distributed_router: "False"
    enable_ha_router: "True"
    enable_lb: "True"
    enable_firewall: "False"
    enable_vpn: "False"
    enable_fip_topology_check: "True"





OpenStack presentation slides (From Kubernetes to OpenStack)






Slides presented at OpenStack Day Korea 2017





## Register as an OpenStack Foundation user



## Register on Launchpad (the email must match your OpenStack Foundation email)

## Check your launchpad.net user id (make sure your profile comes up under your own id)
https://launchpad.net/~seungkyua


## Register on the review site


## Fill in the required information on the review site
1. Register a Username on the Profile menu
2. On Contact Information, check that the date was updated as below (if not, enter your information)
   Contact information last updated on May 25, 2015 at 12:51 PM.
3. Register your SSH public key
   $ cat ~/.ssh/id_rsa.pub
4. Sign the Agreements





[ Adding yourself to stackalytics ]
$ mkdir -p ~/Documents/git && cd ~/Documents/git
$ git clone ssh://seungkyu@review.openstack.org:29418/openstack/stackalytics
$ cd stackalytics


## Install git and git-review
$ brew install git git-review


## Configuration (gitreview.username is the Profile Username on the review site)
$ git config --add gitreview.username "seungkyu"
$ git config --add user.name "Seungkyu Ahn"
$ git config --add user.email "seungkyua@gmail.com"




## Test the connection and download the commit-msg hook
$ git review -s




## Add your own entry (in alphabetical order of launchpad_id); only one company may have end_date: null
## Only launchpad_id is required; the other ids are optional
$ git checkout -b seungkyua
$ vi etc/default_data.json
        {
            "launchpad_id": "seungkyua",
            "gerrit_id": "seungkyu",
            "github_id": "seungkyua",
            "companies": [
                {
                    "company_name": "Samsung SDS",
                    "end_date": "2015-Feb-28"
                },
                {
                    "company_name": "OpenStack Korea User Group",
                    "end_date": "2016-Dec-31"
                },
                {
                    "company_name": "SK telecom",
                    "end_date": null
                }
            ],
            "user_name": "Seungkyu Ahn",
            "emails": ["ahnsk@sk.com", "seungkyua@gmail.com"]
        },




## If the company is not already in the companies list, add it
        {
            "domains": ["sktelecom.com"],
            "company_name": "SK telecom",
            "aliases": ["SKT", "SKTelecom"]
        },



$ git commit -a

## Write the commit message like this
modify personal info about seungkyua


## How to write a commit message
Keep the first line to a summary of 50 characters or less.
[blank line]
Then write a description, wrapping lines at 72 characters. (A sample layout follows.)
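
As an illustrative layout only (the body wording below is hypothetical, not an actual commit from this change):

modify personal info about seungkyua

Add gerrit_id and github_id for seungkyua and record the company
history, leaving only the current company with end_date: null.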



## Upload for review
$ git review



## If git review fails with a Change-Id setup error, run the command the error message suggests
$ gitdir=$(git rev-parse --git-dir); scp -p -P 29418 seungkyu@review.openstack.org:hooks/commit-msg ${gitdir}/hooks/

$ git commit --amend
$ git review


## Verify











Using an OpenStack Prompt

OpenStack 2016. 9. 13. 23:12

## Build a prompt that shows which project and user you are currently acting as when using the OpenStack CLI


## Prompt setup for OpenStack users; shown as (project:user)

$ vi ~/.bashrc


openstack_user() {
  # pull OS_PROJECT_NAME and OS_USERNAME out of the environment and print them as (project:user)
  env | grep -E 'OS_USERNAME|OS_PROJECT_NAME' 2> /dev/null | sed -e 's/OS_PROJECT_NAME=\(.*\)/(\1/' -e 's/OS_USERNAME=\(.*\)/\1)/' | paste -sd ":"
}

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(openstack_user)\$ '





$ . demo/demo-openrc

(demo:demo)$ openstack server list






## Must be run as the root account


# apt-get update
# apt-get install -y gcc make
# apt-get install -y python-pip python-dev python3-dev libevent-dev \
                            vlan libvirt-bin bridge-utils lvm2 openvswitch-switch \
                            python-libvirt nbd-client ipset ntp python-lzma \
                            p7zip-full arping qemu-kvm

# apt-get install -y python-tox libmysqlclient-dev libpq-dev \
                           libxml2-dev libxslt1-dev libvirt-dev libffi-dev

# apt-get install -y virtinst libsemanage1-dev python-semanage \
                            attr policycoreutils


## Install avocado
# cd ~
# mkdir avocado && cd avocado
# git clone git://github.com/avocado-framework/avocado.git
# cd avocado
# make requirements
# python setup.py install


## Install the avocado plugins (avocado-vt)
# cd ~/avocado
# cd avocado
# make requirements-plugins
# make link


# vi ~/.config/avocado/avocado.conf
[datadir.paths]
base_dir = /root/avocado/avocado
test_dir = /root/avocado/avocado/examples/tests
data_dir = /usr/share/avocado/data
logs_dir = /root/avocado/avocado/job-results



## Bootstrapping Avocado-VT (vt-type: qemu, libvirt, ...)
# ./scripts/avocado vt-bootstrap --vt-type libvirt



## List the Avocado plugins
# ./scripts/avocado plugins


## List the tests per vt-type (vt-type: qemu, libvirt, ...)
# ./scripts/avocado list --vt-type libvirt --verbose


## Run a single libvirt test case
# ./scripts/avocado run type_specific.io-github-autotest-qemu.driver_load.with_balloon


## View the results (open this file in a browser)
# /root/avocado/avocado/job-results/job-2016-08-31T09.17-1daa785/html/results.html


## Run the whole test suite
# ./scripts/avocado run type_specific











localconf

OpenStack 2016. 5. 27. 10:59

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.40.102.84   # change to your VM's IP

# Do not use Nova-Network
disable_service n-net

# Enable Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3

## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="10.40.102.0/24"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=10.40.102.250,end=10.40.102.254
PUBLIC_NETWORK_GATEWAY="10.40.102.1"
PUBLIC_INTERFACE=eth0

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

# Disable Identity v2
ENABLE_IDENTITY_V2=False
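
To use it, save the above as local.conf at the top of a devstack checkout and run the installer (the standard devstack workflow; the clone URL matches the one used in the multi-node post below):

$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ vi local.conf    # paste the contents above
$ ./stack.sh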


1. Configure log rotation

    - keeps log files from piling up (see the sketch after this list)


2. Configure Availability Zones and host aggregates

    - lets you schedule VMs efficiently


3. Set the cpu, memory, and disk allocation ratios

    - to account for overcommit


4. Set inject password / inject file to false in Nova Compute

    - makes VMs boot faster


5. Configure Cinder QoS and network QoS

    - QoS on storage and network minimizes interference between workloads


6. Neutron network configuration

    - provider networks avoid tunneling, so they are faster


7. Live migration settings

    - maxdowntime must be set appropriately


8. Measure how fast many VMs can be created concurrently from a new, uncached image

    - fetching the image can saturate the network bandwidth

    - pre-create a VM from that image on every host beforehand so the image gets cached


9. Verify that /var/lib/nova, where VM instance data is stored, has enough disk space
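
For item 1, a minimal logrotate sketch (the path and rotation policy are assumptions; adjust them to your deployment):

# /etc/logrotate.d/nova -- rotate nova logs daily, keep 7 old copies, compress
/var/log/nova/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}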



To be continued ...












[ Server IP information ]

eth0 : NAT type         (vmnet2)  192.168.75.138        Public Network

eth1 : Host-only type (vmnet3)  192.168.230.138      Private Network

[ For a multi-node setup: the second (additional) Compute Node ]
eth0 : NAT type         (vmnet2)  192.168.75.139       Public Network
eth1 : Host-only type (vmnet3)  192.168.230.139      Private Network

[ User ]
Create and work as the stack user

[ visudo setup ]
stack   ALL=(ALL:ALL) NOPASSWD:ALL

[ vi /etc/network/interfaces ]
auto lo
iface lo inet loopback

auto ens33
iface ens33 inet static
        address 192.168.75.138
        netmask 255.255.255.0
        gateway 192.168.75.2
        dns-nameservers 8.8.8.8 8.8.4.4

auto ens34
iface ens34 inet static
        address 192.168.230.138
        netmask 255.255.255.0


[ Hostname setup ]
mkdir -p ~/Documents/scripts
cd ~/Documents/scripts

vi servers.txt
192.168.230.138 devstack01
192.168.230.139 devstack02

vi 01-hosts-setup.sh
#!/bin/bash

SERVERLIST=$HOME/Documents/scripts/servers.txt
MASTER_IP="192.168.230.138"
MASTER_HOSTNAME="devstack01"
SSH_USER="stack"

function set_sshkey() {
    local server=$1
    if [[ $server == "$MASTER_IP" ]]; then
        if [[ ! -f "${HOME}/.ssh/id_rsa" ]]; then
            yes "" | ssh-keygen -t rsa -N ""
        else
            echo "skip to create ssh-keygen"
        fi
    fi
    cat ~/.ssh/id_rsa.pub | ssh $SSH_USER@$server -oStrictHostKeyChecking=no \
        "if [ ! -f ~/.ssh/authorized_keys ] || ! grep -q ${MASTER_HOSTNAME} ~/.ssh/authorized_keys; then \
             umask 077; test -d .ssh || mkdir -p .ssh; cat >> ~/.ssh/authorized_keys; \
         fi"
    echo "$server ssh-key ..... done"
}

function change_hostname() {
    local server=$1
    local hostname=$2
    echo ${hostname} | ssh $SSH_USER@$server \
    "if ! grep -q ${hostname} /etc/hostname; then \
         sudo su -c 'cat > /etc/hostname'; \
         sudo hostname -F /etc/hostname;
     fi"
    echo "$server $hostname ..... done"
}

function change_hostfile() {
    local server=$1
    cat servers.txt | ssh $SSH_USER@$server \
    "if ! grep -q ${MASTER_HOSTNAME} /etc/hosts; then \
         sudo su -c 'cat >> /etc/hosts';
     fi"
    echo "$server hostfile .... done"
}

echo "setting sshkey ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        set_sshkey $server
    fi
done < $SERVERLIST

echo "changing hostname ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        hostname=$(echo $line | awk '{print $2}')
        change_hostname $server $hostname
    fi
done < $SERVERLIST

echo "changing hosts file ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        change_hostfile $server
    fi
done < $SERVERLIST



[ NTP setup ]
vi 02-ntp-setup.sh
#!/bin/bash

SERVERLIST=$HOME/Documents/scripts/servers.txt
MASTER_IP="192.168.230.138"
SSH_USER="stack"

function ntp_master_setup() {
    local server=$1
    echo $server | ssh ${SSH_USER}@$server \
    "sudo apt-get update; \
     sudo apt-get install -y bridge-utils libvirt-bin ntp ntpdate; \
     if ! grep -q 'server 127.127.1.0' /etc/ntp.conf; then \
         sudo sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 3.ubuntu.pool.ntp.org/server time.bora.net/g' /etc/ntp.conf; \
         sudo sed -i 's/server ntp.ubuntu.com/server 127.127.1.0/g' /etc/ntp.conf; \
         sudo sed -i 's/restrict 127.0.0.1/restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap/g' /etc/ntp.conf; \
         sudo service ntp restart; \
     fi; \
     sudo ntpdate -u time.bora.net; \
     sudo virsh net-destroy default; \
     sudo virsh net-undefine default"
}

function ntp_slave_setup() {
    local server=$1
    echo $server | ssh ${SSH_USER}@$server \
    "sudo apt-get update; \
     sudo apt-get install -y bridge-utils libvirt-bin ntp ntpdate; \
     if ! grep -c ${MASTER_IP} /etc/ntp.conf; then \
         sudo sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf; \
         sudo sed -i 's/server ntp.ubuntu.com/server $MASTER_IP/g' /etc/ntp.conf; \
         sudo service ntp restart; \
     fi; \
     sudo ntpdate -u $MASTER_IP; \
     sudo virsh net-destroy default; \
     sudo virsh net-undefine default"
}

echo "ntp master setting ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        if [[ $server == "$MASTER_IP" ]]; then
            ntp_master_setup $server
        fi
    fi
done < $SERVERLIST

echo "ntp slave setting ........."
while read line; do
    if [[ $(echo $line | cut -c1) != "#" ]]; then
        server=$(echo $line | awk '{print $1}')
        if [[ $server != "$MASTER_IP" ]]; then
            ntp_slave_setup $server
        fi
    fi
done < $SERVERLIST



[ local.conf file ]
mkdir -p ~/Documents/github
cd ~/Documents/github
git clone https://github.com/openstack-dev/devstack.git
cd devstack

vi local.conf
[[local|localrc]]
HOST_IP=192.168.75.138
SERVICE_HOST=192.168.75.138
MYSQL_HOST=192.168.75.138
RABBIT_HOST=192.168.75.138
GLANCE_HOSTPORT=192.168.75.138:9292
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret

# Do not use Nova-Network
disable_service n-net

# Neutron service
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.75.0/24"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=192.168.75.193,end=192.168.75.254
PUBLIC_NETWORK_GATEWAY="192.168.75.2"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=ens33

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

# Nova service
enable_service n-api
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service n-novnc
enable_service n-cauth

# Cinder service
enable_service cinder
enable_service c-api
enable_service c-vol
enable_service c-sch
enable_service c-bak

# Tempest service
enable_service tempest

# Swift service
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Heat service
enable_service heat
enable_service h-api
enable_service h-api-cfn
enable_service h-api-cw
enable_service h-eng

# Enable plugin neutron-lbaas, octavia
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas master
enable_plugin octavia https://git.openstack.org/openstack/octavia

# Enable plugin Magnum
#enable_plugin magnum https://github.com/openstack/magnum master
#enable_plugin magnum-ui https://github.com/openstack/magnum-ui master

# Enable plugin Monasca (needs adjusting for systemctl when using Ubuntu 16.04)
enable_plugin monasca-api https://github.com/openstack/monasca-api master
enable_plugin monasca-log-api https://github.com/openstack/monasca-log-api master

MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-python}
MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-python}
MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-influxdb}



# Cinder configuration
VOLUME_GROUP="cinder-volumes"
VOLUME_NAME_PREFIX="volume-"

# Images
# Use this image when creating test instances
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Use this image when working with Orchestration (Heat)
IMAGE_URLS+=",https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2"

KEYSTONE_CATALOG_BACKEND=sql
API_RATE_LIMIT=False
SWIFT_HASH=testing
SWIFT_REPLICAS=1
VOLUME_BACKING_FILE_SIZE=70000M

LOGFILE=$DEST/logs/stack.sh.log

# A clean install every time
RECLONE=yes



[ Adding a Compute Node ]
vi local.conf
[[local|localrc]]
HOST_IP=192.168.75.139
SERVICE_HOST=192.168.75.138
MYSQL_HOST=192.168.75.138
RABBIT_HOST=192.168.75.138
GLANCE_HOSTPORT=192.168.75.138:9292
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret

# Neutron options
PUBLIC_INTERFACE=ens33
ENABLED_SERVICES=n-cpu,n-novnc,rabbit,q-agt

LOGFILE=$DEST/logs/stack.sh.log



[ Run the installation ]
./stack.sh


[ Mount the storage ]
sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 /opt/stack/data/swift/drives/images/swift.img /opt/stack/data/swift/drives/sdb1

sudo losetup /dev/loop1 /opt/stack/data/cinder-volumes-default-backing-file

sudo losetup /dev/loop2 /opt/stack/data/cinder-volumes-lvmdriver-1-backing-file


[ CPU, RAM, and disk overcommit settings ]
vi /etc/nova/nova.conf

scheduler_default_filters = ..., CoreFilter          # add CoreFilter
cpu_allocation_ratio=50.0
ram_allocation_ratio=16.0
disk_allocation_ratio=50.0


[ Run the services ]
screen -c stack-screenrc


[ Create a VM ]
. openrc admin demo


openstack project list
openstack security group list

# add rules to the default security group
openstack security group rule create --proto icmp --src-ip 0.0.0.0/0 --dst-port -1 --ingress 2d95031b-132b-4d46-aacd-f392cdd8c4fb

openstack security group rule create --proto tcp --src-ip 0.0.0.0/0 --dst-port 1:65535 --ingress 2d95031b-132b-4d46-aacd-f392cdd8c4fb

# register a keypair from an existing public key
openstack keypair create --public-key ~/.ssh/id_rsa.pub magnum-key


openstack flavor list
openstack image list
openstack network list

# nova boot
openstack server create --image 7e688989-e59b-4b20-a562-1de946ee91e9 --flavor m1.tiny  --nic net-id=f57b8f2c-cd67-4d49-b38c-393dbb773c9b  --key-name magnum-key --security-group default test-01


# create a floating ip and attach it to the server
openstack ip floating create public
openstack ip floating list
openstack ip floating add 192.168.75.194 test-01


# list the network namespaces (dhcp and router)
sudo ip netns
qdhcp-f57b8f2c-cd67-4d49-b38c-393dbb773c9b
qrouter-b46e14d5-4ef5-4bfa-8dc3-463a982688ab


[ How to tcpdump ]
# Compute Node
[vm] -> tap:[qbrb97b5aa3-f8 Linux Bridge]:qvbb97b5aa3-f8 -> qvob97b5aa3-f8:[OVS br-int Bridge]:patch-tun -> patch-int:[OVS br-tun Bridge]:br-tun ->

# Network Node
br-tun:OVS br-tun Bridge:patch-int -> patch-tun:OVS br-int Bridge:qr-c163af1e-53 -> 
qr-c163af1e-53:qrouter(Namespace) -> qg-d8187261-68:qg(Namespace) -> 
qg-d8187261-68:OVS br-int Bridge:int-br-ex -> phy-br-ex:OVS br-ex Bridge -> NIC 

sudo tcpdump -n -e -i qbrb97b5aa3-f8 | grep 10.0.0.3
sudo tcpdump -n -e -i qvbb97b5aa3-f8 | grep 10.0.0.3
sudo tcpdump -n -e -i qvob97b5aa3-f8 | grep 10.0.0.3
sudo ip netns exec qrouter-b46e14d5-4ef5-4bfa-8dc3-463a982688ab tcpdump -n -e -i qr-c163af1e-53 | grep 10.0.0.3



[ Creating a k8s cluster with Magnum ]
cd ~/Documents/github/devstack/files
wget https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2
glance image-create --name fedora-21-atomic-5 \
                    --visibility public \
                    --disk-format qcow2 \
                    --os-distro fedora-atomic \
                    --container-format bare < fedora-21-atomic-5.qcow2


magnum service-list

magnum baymodel-create --name k8sbaymodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id magnum-key \
                       --external-network-id public \
                       --dns-nameserver 8.8.8.8 \
                       --flavor-id m1.small \
                       --docker-volume-size 5 \
                       --network-driver flannel \
                       --coe kubernetes

magnum baymodel-list
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1

neutron lb-pool-list
neutron lb-vip-list
neutron lb-member-list

magnum bay-list


[ What to delete manually when Magnum cluster creation fails ]
delete the floating ips - api-pool-vip, kube-master, kube-node
openstack ip floating list
sudo ip netns exec qrouter-2f49aeb4-421c-4994-923a-5aafe453fa3d ip a

delete api.pool.vip
neutron lb-vip-list
neutron lb-pool-list
neutron lb-member-list

# delete the private network
openstack network list

# delete the router and its external gateway
openstack router list
openstack port list
openstack router remove port        (removes the gateway)
openstack router remove subnet    (removes the subnet)










