--- /dev/null
+Foreword:
+---------
+This is a set of OpenStack Heat templates which creates a simple topology of
+virtual machines for deploying Kubernetes and Calico.
+
+It consists of one master VM and two optional slave VMs. In the future it
+might be possible to configure the number of slaves, but for now it is fixed.
+
+
+Prerequisites:
+--------------
+In order to run these templates, you need an OpenStack deployment (at least
+the Ocata release; later releases are preferred), either a single-node or a
+multi-node installation.
+
+The job of the Heat stacks is to spawn either one or three VMs which form a
+Kubernetes cluster. The base image is required to exist; by default the
+stacks expect a Glance image named "xenial".
+
+It is therefore required to upload an image before using the templates.
+Currently the templates assume that an Ubuntu Xenial cloud image is used,
+and as such they install the required packages using apt.
+
+See the main control.sh script for starting/stopping the set of stacks and
+for various run-time options, like DPDK support.
+
+
+Usage:
+------
+For a DPDK-enabled deployment, it is usually necessary to pass extra metadata
+in the flavor (e.g. hw:mem_page_size=large). For the DPDK use case you also
+have to create a host aggregate with the pinned=true metadata and add the
+desired compute nodes to this host aggregate.
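The host aggregate prerequisite above could be set up with something like the following sketch; the aggregate name "dpdk-aggr" and the compute host "compute-0" are placeholders, not names used by the stacks, and the commands only run when the openstack client is present:

```shell
# Sketch of the DPDK host aggregate prerequisite described above.
# "dpdk-aggr" and "compute-0" are placeholder names.
aggr="dpdk-aggr"
if command -v openstack >/dev/null 2>&1; then
    # host aggregate carrying the pinned=true metadata
    openstack aggregate create --property pinned=true "$aggr"
    # add the desired compute node(s) to the aggregate
    openstack aggregate add host "$aggr" compute-0
fi
```

The flavor metadata itself (hw:mem_page_size and friends) is already set by the flavor_dpdk resource in the templates, so only the aggregate needs to be created by hand.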
+
+For floating IP support, you need to specify the name of the external network,
+otherwise the script will use the default "external".
+
+Example of running the script on a DPDK deployment:
+ has_dpdk=true external_net=ext_net ./control.sh start
+
+The set of templates currently defines three stacks, each of which can be
+skipped when starting or stopping, if so desired. This is useful e.g. to
+avoid deleting the networks, or to start the setup with the master host
+only. E.g:
+ skip_k8s_net=1 ./control.sh stop
+ skip_k8s_slaves=1 ./control.sh start
+
+Networking:
+-----------
+Have a look at k8s_net.yaml for the network configurations.
+
+Currently the Heat templates define 2 networks:
+- k8s_mgmt_net: primarily used for SSH-ing into the nodes, but it also
+  serves as the access to the external network. Thus the floating IPs (which
+  are enabled by default) will be assigned to the ports on this network.
+- k8s_int_net: the Kubernetes internal network, which is used by the nodes
+  to join the cluster.
+
+Separating the traffic into two networks makes sense in an OpenStack
+environment, as it hides the internal traffic from the outside world.
+Consequently, to access the services inside the cluster, you have to use
+the floating IPs assigned to the Kubernetes servers.
+
+In terms of CNI, there are two additional networks involved, which are also
+defined in k8s_net.yaml. These networks are not visible outside of the Heat
+stacks; Kubernetes and Calico encapsulate packets on them using IP-in-IP.
+To OpenStack they are purely virtual networks, and the only reason to define
+them in k8s_net.yaml is to keep a central view of all the network
+parameters.
+The two networks are described by Heat stack output variables, as follows:
+- k8s_pod_net_cidr: the POD network, passed to kubeadm init --pod-network-cidr
+- k8s_svc_net_cidr: the service network, passed to kubeadm init --service-cidr
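As a sketch, the mapping from these stack outputs to the kubeadm flags looks like this; the CIDR values below simply mirror the defaults in k8s_net.yaml:

```shell
# How the k8s_net stack outputs feed kubeadm. On a live deployment the
# values would be fetched from Heat, as control.sh does:
#   k8s_pod_net_cidr=$(openstack stack output show k8s_net k8s_pod_net_cidr -f value -c output_value)
k8s_pod_net_cidr="100.100.0.0/16"   # stack output k8s_pod_net_cidr
k8s_svc_net_cidr="172.16.1.0/24"    # stack output k8s_svc_net_cidr

kubeadm_cmd="kubeadm init --pod-network-cidr=$k8s_pod_net_cidr --service-cidr=$k8s_svc_net_cidr"
echo "$kubeadm_cmd"
```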
+
+
+Calico networking:
+------------------
+In terms of Calico, k8s_net.yaml defines yet another stack output variable:
+- k8s_cluster_ip: corresponds to the etcd_endpoints parameter in calico.yaml
+
+
+Network security:
+-----------------
+For the moment, for ease of operation, the stacks' ports have port security
+disabled. It should be possible to enable it and provide a set of security
+group rules that allow all TCP and UDP traffic on the internal network.
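If port security were enabled, the rules could be sketched as follows; the group name is a placeholder, the subnet matches k8s_int_subnet in k8s_net.yaml, and the commands only run when an openstack client is available:

```shell
# Hypothetical security group allowing all TCP/UDP from the internal network.
int_cidr="172.16.10.0/24"   # k8s_int_subnet CIDR from k8s_net.yaml
sg="k8s-int-sg"             # placeholder group name
if command -v openstack >/dev/null 2>&1; then
    openstack security group create "$sg"
    # no --dst-port given, so each rule covers the full port range
    for proto in tcp udp; do
        openstack security group rule create \
            --remote-ip "$int_cidr" --protocol "$proto" "$sg"
    done
fi
```

The group would then be referenced from the (currently commented-out) security_groups properties on the Neutron ports in the templates.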
+
+
+Cluster setup:
+--------------
+The cluster configures itself automatically and installs the base IEC
+platform together with the resources needed for Helm. SEBA or other
+applications have to be installed manually afterwards.
+
+For the K8s cluster setup, the master VM writes the join command to the file
+/home/ubuntu/joincmd. The slave VMs then connect to the master VM over ssh
+and read the joincmd file.
+
+All of this is achieved using cloud-init scripts that run at startup. You
+can follow the progress of the init scripts by looking at the console log,
+which is currently very verbose.
+After the setup is completed, you can look for the joincmd string in the output.
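For instance, one way to poll for completion from the deployment host is the following sketch; "k8s-master" is the server name set in k8s_master.yaml, and the loop only runs when the openstack client is available:

```shell
# Poll the master's console log until the join command shows up.
server="k8s-master"   # server name from k8s_master.yaml
if command -v openstack >/dev/null 2>&1; then
    until openstack console log show "$server" | grep -q joincmd; do
        sleep 10
    done
fi
```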
+
+
+Using the cluster:
+------------------
+Once the setup is complete, you can log in to the k8s_master VM. Use the
+Horizon interface or ssh to the floating IP, using the default credentials:
+ubuntu:ubuntu
+
+A key pair is also generated and the private key is saved in a file called
+ak-key.pem, but for now password logins are permitted for ease of operation.
+
+Once logged into the master VM, you need to become root.
+ sudo su -
+
+From here it is possible to run the usual Kubernetes and Helm tools, thanks
+to the KUBECONFIG environment variable being exported through /root/.profile.
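The export in /root/.profile is assumed to point at the standard kubeadm admin config, along the lines of:

```shell
# Assumed contents of the KUBECONFIG export in /root/.profile
# (this path is the default location written by kubeadm init):
export KUBECONFIG=/etc/kubernetes/admin.conf
```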
+
+It is also possible to use Kubernetes as a non-root user, in which case you
+need to manually create ~/.kube/ and copy the Kubernetes config:
+ mkdir -p $HOME/.kube
+ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+ sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+The most basic operation you can run is verifying the nodes in the cluster:
+ kubectl get nodes
+ kubectl describe node k8s-master
--- /dev/null
+#!/bin/sh
+
+# shellcheck disable=SC2086
+
+# set DPDK if available
+has_dpdk=${has_dpdk:-"false"}
+
+################################################################
+# Stack parameters
+base_img=${base_img:-"xenial"}
+key_name=${key_name:-"ak-key"}
+k8s_master_vol=${k8s_master_vol:-"k8s_master_vol"}
+external_net=${external_net:-"external"}
+k8s_user=${k8s_user:-"ubuntu"}
+k8s_password=${k8s_password:-"ubuntu"}
+has_dpdk_param=
+
+floating_ip_param="--parameter public_ip_pool=$external_net"
+
+if [ "$has_dpdk" = true ]; then
+ has_dpdk_param="--parameter has_dpdk=true"
+fi
+
+################################################################
+
+set -ex
+
+retries=5
+
+if [ -z "$OS_AUTH_URL" ]; then
+ echo "OS_AUTH_URL not set; aborting"
+ exit 1
+fi
+
+if ! [ -f "$key_name".pem ]
+then
+    nova keypair-add "$key_name" > "$key_name".pem
+    chmod 600 "$key_name".pem
+fi
+
+skip_k8s_net=${skip_k8s_net:-}
+skip_k8s_master=${skip_k8s_master:-}
+skip_k8s_slaves=${skip_k8s_slaves:-}
+
+stack_k8s_net=
+stack_k8s_master=
+stack_k8s_slaves=
+
+case $1 in
+start|stop)
+ cmd=$1
+ shift
+ ;;
+restart)
+ shift
+ tries=0
+ while ! $0 stop "$@"; do
+ tries=$((tries+1))
+ if [ $tries -gt $retries ]; then
+ echo "Unable to stop demo, exiting"
+ exit 1
+ fi
+ done
+ $0 start "$@"
+ exit $?
+ ;;
+*)
+ echo "Control script for managing a simple K8s cluster of VMs using Heat"
+ echo "Available stacks:"
+    echo " - k8s_net - all the required networks and subnets"
+ echo " - k8s_master - K8s master VM"
+ echo " - k8s_slaves - configurable number of K8s slave VMs"
+ echo "Use skip_<stack> to skip starting/stopping stacks, e.g."
+    echo "#:~ > skip_k8s_net=1 $0 stop"
+ echo "usage: $0 [start|stop] [k8s_net] [k8s_master] [k8s_slaves]"
+ exit 1
+ ;;
+esac
+
+if [ $# -gt 0 ]; then
+ skip_k8s_net=1
+ while [ $# -gt 0 ]; do
+ eval unset skip_"$1"
+ shift
+ done
+fi
+
+# check OS status
+tries=0
+while ! openstack compute service list > /dev/null 2>&1; do
+ tries=$((tries+1))
+ if [ $tries -gt $retries ]; then
+ echo "Unable to check Openstack health, exiting"
+ exit 2
+ fi
+ sleep 5
+done
+
+for stack in $(openstack stack list -f value -c "Stack Name"); do
+ echo "$stack" | grep -sq -e '^[a-zA-Z0-9_]*$' && eval stack_"$stack"=1
+done
+
+case $cmd in
+start)
+ if [ -z "$stack_k8s_net" ] && [ -z "$skip_k8s_net" ]; then
+ echo "Starting k8s_net"
+ openstack stack create --wait \
+ --parameter external_net="$external_net" \
+ -t k8s_net.yaml k8s_net
+ # Might need to wait for the networks to become available
+ # sleep 5
+ fi
+
+# master_vol=$(openstack volume show $k8s_master_vol -f value -c id)
+# --parameter volume_id=$master_vol \
+
+ k8s_master_ip=$(openstack stack output show k8s_net k8s_master_ip -f value -c output_value)
+ k8s_pod_net_cidr=$(openstack stack output show k8s_net k8s_pod_net_cidr -f value -c output_value)
+ k8s_svc_net_cidr=$(openstack stack output show k8s_net k8s_svc_net_cidr -f value -c output_value)
+ k8s_cluster_ip=$(openstack stack output show k8s_net k8s_cluster_ip -f value -c output_value)
+ if [ -z "$stack_k8s_master" ] && [ -z "$skip_k8s_master" ]; then
+ echo "Starting Kubernetes master"
+ openstack stack create --wait \
+ --parameter key_name="$key_name" \
+ --parameter k8s_master_ip="$k8s_master_ip" \
+ --parameter k8s_pod_net_cidr="$k8s_pod_net_cidr" \
+ --parameter k8s_svc_net_cidr="$k8s_svc_net_cidr" \
+ --parameter k8s_cluster_ip="$k8s_cluster_ip" \
+ --parameter k8s_user="$k8s_user" \
+ --parameter k8s_password="$k8s_password" \
+ $floating_ip_param \
+ $has_dpdk_param \
+ -t k8s_master.yaml k8s_master
+ fi
+
+ if [ -z "$stack_k8s_slaves" ] && [ -z "$skip_k8s_slaves" ]; then
+ echo "Starting Kubernetes slaves"
+ openstack stack create --wait \
+ --parameter key_name="$key_name" \
+ --parameter k8s_master_ip="$k8s_master_ip" \
+ --parameter k8s_pod_net_cidr="$k8s_pod_net_cidr" \
+ --parameter k8s_svc_net_cidr="$k8s_svc_net_cidr" \
+ --parameter k8s_cluster_ip="$k8s_cluster_ip" \
+ --parameter k8s_user="$k8s_user" \
+ --parameter k8s_password="$k8s_password" \
+ $floating_ip_param \
+ $has_dpdk_param \
+ -t k8s_slaves.yaml k8s_slaves
+ fi
+
+ openstack stack list
+ ;;
+stop)
+ if [ -n "$stack_k8s_slaves" ] && [ -z "$skip_k8s_slaves" ]; then
+ echo "Stopping Kubernetes slaves"
+ openstack stack delete --yes --wait k8s_slaves
+ fi
+
+ if [ -n "$stack_k8s_master" ] && [ -z "$skip_k8s_master" ]; then
+ echo "Stopping Kubernetes master"
+ openstack stack delete --yes --wait k8s_master
+ fi
+
+ if [ -n "$stack_k8s_net" ] && [ -z "$skip_k8s_net" ]; then
+ echo "Stopping k8s_net"
+ openstack stack delete --yes --wait k8s_net
+ fi
+
+ openstack stack list
+ ;;
+esac
--- /dev/null
+# yamllint disable-line rule:document-start
+heat_template_version: 2016-10-14
+
+description: "K8s master VM"
+
+parameters:
+ key_name:
+ type: string
+ description: management ssh key
+ default: 'ak-key'
+
+ k8s_master_hostname:
+ type: string
+ description: Hostname of the K8s master node
+ default: "k8s-master"
+
+ k8s_master_vol:
+ type: string
+ default: "k8s_master_vol"
+
+ k8s_mgmt_net:
+ type: string
+ description: management network
+ default: "k8s_mgmt_net"
+
+ k8s_int_net:
+ type: string
+ description: Kubernetes service network
+ default: "k8s_int_net"
+
+ k8s_master_ip:
+ type: string
+ description: k8s_master management IP (fixed)
+
+ k8s_pod_net_cidr:
+ type: string
+    description: k8s pod_net CIDR used for setting up the k8s cluster
+
+ k8s_svc_net_cidr:
+ type: string
+    description: k8s svc_net CIDR used for setting up the k8s cluster
+
+ k8s_cluster_ip:
+ type: string
+    description: k8s service IP address used for setting up the k8s cluster
+
+ k8s_user:
+ type: string
+ description: User id to connect to the VMs (ssh)
+ default: "ubuntu"
+
+ k8s_password:
+ type: string
+ description: Access password for the user to connect to the VMs (ssh)
+ default: "ubuntu"
+
+ public_ip_pool:
+ type: string
+ description: Public IP pool
+ default: "external"
+
+ enable_floating_ip:
+ type: boolean
+ default: true
+
+ has_dpdk:
+ type: boolean
+ default: false
+
+conditions:
+ cond_floating_ip: {equals: [{get_param: enable_floating_ip}, true]}
+ has_dpdk: {equals: [{get_param: has_dpdk}, true]}
+
+resources:
+ flavor:
+ type: OS::Nova::Flavor
+ properties:
+ ram: 16384
+ vcpus: 4
+ disk: 10
+
+ flavor_dpdk:
+ type: OS::Nova::Flavor
+ properties:
+ ram: 16384
+ vcpus: 8
+ disk: 40
+ extra_specs:
+ "hw:mem_page_size": large
+ "hw:cpu_policy": dedicated
+ "aggregate_instance_extra_specs:pinned": "true"
+ "hw:numa_node.0": 0
+ "hw:numa_nodes": 1
+
+ server_fip:
+ type: OS::Nova::FloatingIP
+ condition: cond_floating_ip
+ properties:
+ pool: {get_param: public_ip_pool}
+
+ server_association_fip:
+ type: OS::Nova::FloatingIPAssociation
+ condition: cond_floating_ip
+ properties:
+ floating_ip: {get_resource: server_fip}
+ server_id: {get_resource: server}
+
+ mgmt_port:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_mgmt_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+
+ int_net_port:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_int_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+ fixed_ips: [{"ip_address": {get_param: k8s_master_ip}}]
+
+ server_cloudinit_config:
+ type: OS::Heat::CloudConfig
+ properties:
+ cloud_config:
+ password: ubuntu
+ chpasswd: {expire: false}
+ ssh_pwauth: true
+ manage_etc_hosts: true
+ disable_root: false
+
+ server_config:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config:
+ str_replace:
+ template: {get_file: k8s_master_init.sh}
+ params:
+ k8s_master_hostname: {get_param: k8s_master_hostname}
+ k8s_master_ip: {get_param: k8s_master_ip}
+ k8s_pod_net_cidr: {get_param: k8s_pod_net_cidr}
+ k8s_svc_net_cidr: {get_param: k8s_svc_net_cidr}
+ k8s_cluster_ip: {get_param: k8s_cluster_ip}
+ k8s_user: {get_param: k8s_user}
+
+ server_user_data:
+ type: OS::Heat::MultipartMime
+ properties:
+ parts:
+ - config: {get_resource: server_cloudinit_config}
+ - config: {get_resource: server_config}
+
+ server_security_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ description: Security group for ssh and icmp
+ name: test-security-group
+ rules: [
+ {remote_ip_prefix: 0.0.0.0/0,
+ protocol: tcp,
+ port_range_min: 1,
+ port_range_max: 65535},
+ {remote_ip_prefix: 0.0.0.0/0,
+ protocol: udp,
+ port_range_min: 1,
+ port_range_max: 65535},
+ {remote_ip_prefix: 0.0.0.0/0, protocol: icmp}
+ ]
+
+ # k8s_master_volume:
+ # type: OS::Cinder::Volume
+ # properties:
+ # description: 'user: Volume for Node1'
+ # image: "xenial"
+ # name: {get_param: k8s_master_vol}
+ # size: 20
+ # availability_zone: nova
+
+ server:
+ type: OS::Nova::Server
+ properties:
+ name: k8s-master
+ key_name: {get_param: key_name}
+ flavor: {get_resource: {if: ["has_dpdk", "flavor_dpdk", "flavor"]}}
+ image: "xenial"
+ # block_device_mapping: [
+ # {device_name: "vda",
+ # volume_id:
+ # {get_resource: k8s_master_volume},
+ # delete_on_termination: true
+ # }
+ # ]
+ user_data: {get_resource: server_user_data}
+ user_data_format: RAW
+ networks:
+ - port: {get_resource: mgmt_port}
+ - port: {get_resource: int_net_port}
--- /dev/null
+#!/bin/bash
+set -ex
+sed -i -e 's/^\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\)\([\t ]\+\)\(k8s_master_hostname.*$\)/k8s_master_ip\2\3/g' /etc/hosts
+apt update
+pwd
+# cloud-init does not seem to set $HOME, so default it to /root
+HOME=${HOME:-/root}
+export HOME
+git clone https://gerrit.akraino.org/r/iec
+cd iec/src/foundation/scripts
+./k8s_common.sh
+./k8s_master.sh k8s_master_ip k8s_pod_net_cidr k8s_svc_net_cidr
+. ${HOME}/.profile
+./setup-cni.sh k8s_cluster_ip k8s_pod_net_cidr
+token=$(kubeadm token list --skip-headers | awk 'END{print $1}')
+shaid=$(openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d ' ' -f1)
+echo "kubeadm join k8s_master_ip:6443 --token $token --discovery-token-ca-cert-hash sha256:$shaid" > /home/k8s_user/joincmd
+cat /home/k8s_user/joincmd
+./nginx.sh
+./helm.sh
--- /dev/null
+# yamllint disable-line rule:document-start
+heat_template_version: 2015-04-30
+
+parameters:
+ external_net:
+ type: string
+ description: Name of the external network
+ default: "external"
+
+resources:
+
+ k8s_mgmt_net:
+ type: OS::Neutron::Net
+ properties:
+ name: "k8s_mgmt_net"
+
+ k8s_mgmt_subnet:
+ type: OS::Neutron::Subnet
+ properties:
+ network_id: {get_resource: k8s_mgmt_net}
+ cidr: "192.168.11.0/24"
+ gateway_ip: 192.168.11.254
+ ip_version: 4
+
+ k8s_mgmt_router:
+ type: OS::Neutron::Router
+ properties:
+ external_gateway_info: {network: {get_param: external_net}}
+
+ k8s_mgmt_interface:
+ type: OS::Neutron::RouterInterface
+ properties:
+ router_id: {get_resource: k8s_mgmt_router}
+ subnet: {get_resource: k8s_mgmt_subnet}
+
+ k8s_int_net:
+ type: OS::Neutron::Net
+ properties:
+ name: "k8s_int_net"
+
+ k8s_int_subnet:
+ type: OS::Neutron::Subnet
+ properties:
+ network_id: {get_resource: k8s_int_net}
+ cidr: "172.16.10.0/24"
+ gateway_ip: null
+ allocation_pools:
+ - start: 172.16.10.10
+ end: 172.16.10.253
+ ip_version: 4
+ enable_dhcp: false
+
+outputs:
+ k8s_master_ip:
+ value: "172.16.10.36"
+ k8s_pod_net_cidr:
+ value: "100.100.0.0/16"
+ k8s_svc_net_cidr:
+ value: "172.16.1.0/24"
+ k8s_cluster_ip:
+ value: "172.16.1.136"
--- /dev/null
+# yamllint disable-line rule:document-start
+heat_template_version: 2016-10-14
+
+description: "K8s slave VMs"
+
+parameters:
+ key_name:
+ type: string
+ description: management ssh key
+ default: 'ak-key'
+
+ k8s_slave0_hostname:
+ type: string
+ description: Hostname of the K8s slave0 node
+ default: "k8s-slave0"
+
+ k8s_slave1_hostname:
+ type: string
+    description: Hostname of the K8s slave1 node
+ default: "k8s-slave1"
+
+ k8s_mgmt_net:
+ type: string
+ description: management network
+ default: "k8s_mgmt_net"
+
+ k8s_int_net:
+ type: string
+ description: Kubernetes service network
+ default: "k8s_int_net"
+
+ k8s_master_ip:
+ type: string
+ description: k8s_master management IP (fixed)
+
+ k8s_slave0_ip:
+ type: string
+    description: k8s_slave0 management IP (fixed)
+ default: "172.16.10.37"
+
+ k8s_slave1_ip:
+ type: string
+    description: k8s_slave1 management IP (fixed)
+ default: "172.16.10.38"
+
+ k8s_pod_net_cidr:
+ type: string
+    description: k8s pod_net CIDR used for setting up the k8s cluster
+
+ k8s_svc_net_cidr:
+ type: string
+    description: k8s svc_net CIDR used for setting up the k8s cluster
+
+ k8s_cluster_ip:
+ type: string
+    description: k8s service IP address used for setting up the k8s cluster
+
+ k8s_user:
+ type: string
+ description: User id to connect to the VMs (ssh)
+ default: "ubuntu"
+
+ k8s_password:
+ type: string
+ description: Access password for the user to connect to the VMs (ssh)
+ default: "ubuntu"
+
+ public_ip_pool:
+ type: string
+ description: Public IP pool
+ default: "external"
+
+ enable_floating_ip:
+ type: boolean
+ default: true
+
+ has_dpdk:
+ type: boolean
+ default: false
+
+conditions:
+ cond_floating_ip: {equals: [{get_param: enable_floating_ip}, true]}
+ has_dpdk: {equals: [{get_param: has_dpdk}, true]}
+
+resources:
+ flavor:
+ type: OS::Nova::Flavor
+ properties:
+ ram: 10240
+ vcpus: 4
+ disk: 10
+
+ flavor_dpdk:
+ type: OS::Nova::Flavor
+ properties:
+ ram: 10240
+ vcpus: 8
+ disk: 40
+ extra_specs:
+ "hw:mem_page_size": large
+ "hw:cpu_policy": dedicated
+ "aggregate_instance_extra_specs:pinned": "true"
+ "hw:numa_node.0": 0
+ "hw:numa_nodes": 1
+
+ server_cloudinit_config:
+ type: OS::Heat::CloudConfig
+ properties:
+ cloud_config:
+ password: ubuntu
+ chpasswd: {expire: false}
+ ssh_pwauth: true
+ manage_etc_hosts: true
+ disable_root: false
+
+ server_config0:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config:
+ str_replace:
+ template: {get_file: k8s_slaves_init.sh}
+ params:
+ k8s_slave_hostname: {get_param: k8s_slave0_hostname}
+ k8s_master_ip: {get_param: k8s_master_ip}
+ k8s_slave_ip: {get_param: k8s_slave0_ip}
+ k8s_pod_net_cidr: {get_param: k8s_pod_net_cidr}
+ k8s_svc_net_cidr: {get_param: k8s_svc_net_cidr}
+ k8s_cluster_ip: {get_param: k8s_cluster_ip}
+ k8s_user: {get_param: k8s_user}
+ k8s_password: {get_param: k8s_password}
+
+ server_user_data0:
+ type: OS::Heat::MultipartMime
+ properties:
+ parts:
+ - config: {get_resource: server_cloudinit_config}
+ - config: {get_resource: server_config0}
+
+ server_config1:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config:
+ str_replace:
+ template: {get_file: k8s_slaves_init.sh}
+ params:
+ k8s_slave_hostname: {get_param: k8s_slave1_hostname}
+ k8s_master_ip: {get_param: k8s_master_ip}
+ k8s_slave_ip: {get_param: k8s_slave1_ip}
+ k8s_pod_net_cidr: {get_param: k8s_pod_net_cidr}
+ k8s_svc_net_cidr: {get_param: k8s_svc_net_cidr}
+ k8s_cluster_ip: {get_param: k8s_cluster_ip}
+ k8s_user: {get_param: k8s_user}
+ k8s_password: {get_param: k8s_password}
+
+ server_user_data1:
+ type: OS::Heat::MultipartMime
+ properties:
+ parts:
+ - config: {get_resource: server_cloudinit_config}
+ - config: {get_resource: server_config1}
+
+ server_security_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ description: Security group for ssh and icmp
+ name: test-security-group
+ rules: [
+ {remote_ip_prefix: 0.0.0.0/0,
+ protocol: tcp,
+ port_range_min: 1,
+ port_range_max: 65535},
+ {remote_ip_prefix: 0.0.0.0/0,
+ protocol: udp,
+ port_range_min: 1,
+ port_range_max: 65535},
+ {remote_ip_prefix: 0.0.0.0/0, protocol: icmp}
+ ]
+
+ slave_fip0:
+ type: OS::Nova::FloatingIP
+ condition: cond_floating_ip
+ properties:
+ pool: {get_param: public_ip_pool}
+
+ server_association_fip0:
+ type: OS::Nova::FloatingIPAssociation
+ condition: cond_floating_ip
+ properties:
+ floating_ip: {get_resource: slave_fip0}
+ server_id: {get_resource: slave0}
+
+ slave_fip1:
+ type: OS::Nova::FloatingIP
+ condition: cond_floating_ip
+ properties:
+ pool: {get_param: public_ip_pool}
+
+ server_association_fip1:
+ type: OS::Nova::FloatingIPAssociation
+ condition: cond_floating_ip
+ properties:
+ floating_ip: {get_resource: slave_fip1}
+ server_id: {get_resource: slave1}
+
+ mgmt_port0:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_mgmt_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+
+ int_net_port0:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_int_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+ fixed_ips: [{"ip_address": {get_param: k8s_slave0_ip}}]
+
+ mgmt_port1:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_mgmt_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+
+ int_net_port1:
+ type: OS::Neutron::Port
+ properties:
+ network: {get_param: k8s_int_net}
+ port_security_enabled: false
+ # security_groups:
+ # - {get_resource: server_security_group}
+ fixed_ips: [{"ip_address": {get_param: k8s_slave1_ip}}]
+
+ slave0:
+ type: OS::Nova::Server
+ properties:
+ name: "k8s-slave0"
+ key_name: {get_param: key_name}
+ flavor: {get_resource: {if: ["has_dpdk", "flavor_dpdk", "flavor"]}}
+ image: "xenial"
+ user_data: {get_resource: server_user_data0}
+ user_data_format: RAW
+ # security_groups:
+ # - {get_resource: server_security_group}
+ networks:
+ - port: {get_resource: mgmt_port0}
+ - port: {get_resource: int_net_port0}
+
+ slave1:
+ type: OS::Nova::Server
+ properties:
+ name: "k8s-slave1"
+ key_name: {get_param: key_name}
+ flavor: {get_resource: {if: ["has_dpdk", "flavor_dpdk", "flavor"]}}
+ image: "xenial"
+ user_data: {get_resource: server_user_data1}
+ user_data_format: RAW
+ # security_groups:
+ # - {get_resource: server_security_group}
+ networks:
+ - port: {get_resource: mgmt_port1}
+ - port: {get_resource: int_net_port1}
--- /dev/null
+#!/bin/bash
+set -ex
+echo "K8s Master IP is k8s_master_ip"
+sudo sed -i -e 's/^\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\)\([\t ]\+\)\(k8s_slave_hostname.*$\)/k8s_slave_ip\2\3/g' /etc/hosts
+apt update
+apt install -y sshpass
+pwd
+git clone https://gerrit.akraino.org/r/iec
+cd iec/src/foundation/scripts
+./k8s_common.sh
+joincmd=$(sshpass -p k8s_password ssh -o StrictHostKeyChecking=no k8s_user@k8s_master_ip 'for i in {1..300}; do if [ -f /home/ubuntu/joincmd ]; then break; else sleep 1; fi; done; cat /home/ubuntu/joincmd')
+eval sudo $joincmd