Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

Convention for heading levels in Integrated Edge Cloud documentation:

  =======  Heading 0 (reserved for the title in a document)
  =======  Heading 1
  -------  Heading 2
  ~~~~~~~  Heading 3

Avoid deeper levels because they do not render well.

=================================
IEC Reference Foundation Overview
=================================

This document provides a general description of the IEC reference foundation.
The Integrated Edge Cloud (IEC) will enable new functionalities and business models
on the network edge. The benefits of running applications on the network edge are:

- Better latency for end users
- Less load on the network, since more data can be processed locally
- Full use of the computation power of the edge devices

.. _Kubernetes: https://kubernetes.io/
.. _Calico: https://www.projectcalico.org/
.. _Contiv: https://github.com/contiv/vpp
.. _OVN-kubernetes: https://github.com/openvswitch/ovn-kubernetes

Currently, the chosen operating system (OS) is Ubuntu 16.04 and/or 18.04.
The infrastructure orchestration of IEC is based on Kubernetes_, a
production-grade container orchestration system with a rich ecosystem.
The container network interface (CNI) plugin currently chosen for Kubernetes is
Calico_, a high-performance, scalable, policy-enabled and widely used container
networking solution that is rather easy to install and supports arm64. In the future,
Contiv_/VPP or OVN-kubernetes_ may also become candidates for Kubernetes networking.

Kubernetes Install for Ubuntu
-----------------------------

Install Docker as Prerequisite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _Docker: https://www.docker.com/
.. _install: https://docs.docker.com/install/linux/docker-ce/ubuntu/

Docker_ is used by Kubernetes for container image management. The installation script for Docker
version 18.06 is given below; more Docker installation information can be found in the install_
guide::

  DOCKER_VERSION=18.06.1
  # Target architecture of the host, e.g. arm64 on an arm64 edge server
  ARCH=$(dpkg --print-architecture)
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  sudo apt-key fingerprint 0EBFCD88
  sudo add-apt-repository \
    "deb [arch=${ARCH}] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
  sudo apt-get update
  sudo apt-get install -y docker-ce=${DOCKER_VERSION}~ce~3-0~ubuntu
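
You can optionally verify the Docker installation with::

  # Confirm the installed version and that the daemon responds
  sudo docker version
  # Run a minimal container to verify that image pull and execution work
  sudo docker run --rm hello-world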

Disable swap on your machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kubernetes requires swap to be disabled. Turn off all swap devices and files with::

  sudo swapoff -a
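
To keep swap disabled across reboots, you may also want to comment out any swap
entries in ``/etc/fstab``, for example::

  # Comment out every uncommented line that configures a swap device or file
  sudo sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab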

.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Install Kubernetes with Kubeadm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubeadm_ helps you bootstrap a minimum viable Kubernetes cluster that conforms
to best practices, and it is currently the preferred installation method for IEC.
We currently use v1.13.0 as the stable version of Kubernetes for arm64.
Usually the management interface of the current host (edge server/gateway) is chosen as
the kube-apiserver advertise address, which is indicated here as ``$MGMT_IP``.

The common installation steps for both Kubernetes master and slave nodes are given
as Linux shell commands::

  apt-get update && apt-get install -y apt-transport-https curl
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
  deb https://apt.kubernetes.io/ kubernetes-xenial main
  EOF
  apt-get update
  apt-get install -y kubelet=1.13.0-00 kubeadm=1.13.0-00 kubectl=1.13.0-00
  apt-mark hold kubelet kubeadm kubectl
  sysctl net.bridge.bridge-nf-call-iptables=1
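
You can optionally confirm that the expected versions were installed and that the
packages are held at that version::

  kubeadm version
  kubectl version --client
  # Lists the packages pinned by "apt-mark hold"
  apt-mark showhold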

For a host set up as the Kubernetes `master`::

  sudo kubeadm config images pull
  sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=$MGMT_IP \
    --service-cidr=172.16.1.0/24

To start using your cluster, you need to run the following (as a regular user)::

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

or, if you are the ``root`` user::

  export KUBECONFIG=/etc/kubernetes/admin.conf

For hosts set up as Kubernetes `slave` nodes::

  kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>

in which the token and the CA cert hash are given in the output of the master's ``kubeadm init``,

or use the following command, which skips the CA cert verification::

  kubeadm join --token <token> <master-ip>:6443 --discovery-token-unsafe-skip-ca-verification
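
If the token was not recorded or has expired, a new join command can be generated
on the master node, for example::

  # Prints a fresh "kubeadm join ..." command, including a new token and the CA cert hash
  sudo kubeadm token create --print-join-command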

After the `slave` has joined the Kubernetes cluster, you can check the cluster nodes
on the master node with the command::

  kubectl get nodes

Install the Calico CNI Plugin to Kubernetes Cluster
---------------------------------------------------

Now we install the Calico_ network add-on so that Kubernetes pods can communicate with each other.
The network must be deployed before any applications. Kubeadm only supports Container Network
Interface (CNI) based networks, which Calico supports.

Install the Etcd Database
~~~~~~~~~~~~~~~~~~~~~~~~~

This Calico deployment stores its state in an etcd database. Install etcd with::

  kubectl apply -f https://raw.githubusercontent.com/Jingzhao123/arm64TemporaryCalico/temporay_arm64/v3.3/getting-started/kubernetes/installation/hosted/etcd-arm64.yaml

Install the RBAC Roles required for Calico
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install the RBAC roles with::

  kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/rbac.yaml

Install Calico to system
~~~~~~~~~~~~~~~~~~~~~~~~

Firstly, we should get the configuration file from the web site and modify the corresponding
images from the amd64 to the arm64 version. Then, by using kubectl, the Calico pods will be
created. Get the configuration file with::

  wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/calico.yaml

Since the "quay.io/calico" image repository does not support multi-arch images, we have
to replace the "quay.io/calico" image path with "calico", which does support multi-arch::

  sed -i "s/quay.io\/calico/calico/" calico.yaml

Deploy Calico with the following command::

  kubectl apply -f calico.yaml
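
To check that the Calico components come up correctly, watch the pods in the
``kube-system`` namespace until they reach the ``Running`` state::

  kubectl get pods -n kube-system -o wide | grep -E 'calico|etcd'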

In the calico.yaml file, there is an option ``IP_AUTODETECTION_METHOD`` for choosing the
network interface. The default value is ``first-found``, which means the first valid
IP address (excluding the local interface and the docker bridge) is used. So if your server
has more than one network interface, you should configure this option according to your
networking environment. If it is not configured properly, the calico-node pod reports errors
such as "BGP not established with X.X.X.X".
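
For example, to pin Calico to a specific interface, the autodetection method can be set
on the calico-node DaemonSet (the interface name ``eth1`` below is only an illustration;
use the interface that carries your node addresses)::

  kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth1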

Remove the taints on master node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, no pods are scheduled on the master node. If you want to schedule pods on the
master (for example, on a single-node cluster), remove its taint with::

  kubectl taint nodes --all node-role.kubernetes.io/master-
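
You can confirm that the taint has been removed with::

  # An empty or "<none>" Taints field means pods can be scheduled on the node
  kubectl describe nodes | grep -i taints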

Verification for the Work of Kubernetes
---------------------------------------

Now we can verify that Kubernetes and Calico work by creating and accessing a Kubernetes
pod and service based on Nginx, which is a widely used web server.

Firstly, create a file named nginx-app.yaml to describe a pod and a service::

  $ cat <<EOF >~/nginx-app.yaml
  kind: ReplicationController

Then test the Kubernetes working status with the script::

  kubectl create -f ~/nginx-app.yaml
  # Count the nginx pods that have reached the Running state
  r=$(kubectl get pods | grep Running | wc -l)
  # Extract the cluster IP of the nginx service
  svcip=$(kubectl get services nginx -o json | grep clusterIP | cut -f4 -d'"')
  kubectl delete -f ~/nginx-app.yaml

.. _Helm: https://github.com/helm/helm

Helm Install on Arm64
---------------------

Helm_ is a tool for managing Kubernetes charts. Charts are packages of pre-configured
Kubernetes resources. The installation of Helm on arm64 is as follows::

  wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-arm64.tar.gz
  tar xvf helm-v2.12.3-linux-arm64.tar.gz
  sudo cp linux-arm64/helm /usr/bin
  sudo cp linux-arm64/tiller /usr/bin
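
You can check that the Helm client binary works on the arm64 host with::

  # Prints the client version only and does not require Tiller to be deployed yet
  helm version --client

Note that installing charts into the cluster with Helm v2 additionally requires a
Tiller image that runs on arm64.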

We would like to provide a walk-through shell script to automate the installation of Kubernetes
and Calico in the future, but this README is still useful for IEC developers and users.

For issues or anything related to the reference foundation stack of IEC, you can contact:

Trevor Tao: trevor.tao@arm.com

Jingzhao Ni: jingzhao.ni@arm.com

Jianlin Lv: jianlin.lv@arm.com