Add Multus with Calico and SRIOV CNI support 27/2227/2
author Trevor Tao <trevor.tao@arm.com>
Fri, 7 Feb 2020 12:21:38 +0000 (20:21 +0800)
committer Trevor Tao <trevor.tao@arm.com>
Fri, 7 Feb 2020 12:30:00 +0000 (20:30 +0800)
This commit provides Kubernetes networking support for Multus
with SRIOV CNI and Calico CNI, on both arm64 and amd64.

A special configuration file for Broadcom smartNIC Stingray
PS225 is provided as an example.

Here Calico is provided as the default CNI for any pod
without an explicit network annotation.
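
As a hedged illustration (not part of the scripts themselves), a pod
selects the extra SRIOV interface through the standard Multus
annotation, while a pod without the annotation only gets the default
Calico network. The NetworkAttachmentDefinition name 'sriov-net1' is
a placeholder; the real one is defined in sriov-crd.yaml, and the
resource name assumes the arm.com prefix used by the SRIOV device
plugin in this commit:

apiVersion: v1
kind: Pod
metadata:
  name: testpod1
  annotations:
    # Placeholder NAD name; use the one defined in sriov-crd.yaml
    k8s.v1.cni.cncf.io/networks: sriov-net1
spec:
  containers:
  - name: appcntr1
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep infinity"]
    resources:
      requests:
        arm.com/ps225_sriov_netdevice: '1'
      limits:
        arm.com/ps225_sriov_netdevice: '1'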

Updated the README.md to add more explanation of the
current work and to reflect the change.

For detailed information, please refer to the README.md in
this commit.

Signed-off-by: Trevor Tao <trevor.tao@arm.com>
Change-Id: I60ac7d636be8e272cd82c3697833947850e7f49e
Signed-off-by: Trevor Tao <trevor.tao@arm.com>
12 files changed:
src/foundation/scripts/cni/multus/README.md
src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset-k8s-v1.16.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/configMap.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/install-k8s-v1.16.sh [new file with mode: 0755]
src/foundation/scripts/cni/multus/multus-sriov-calico/install.sh [new file with mode: 0755]
src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets-k8s-v1.16.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/sriov-crd.yaml [new file with mode: 0644]
src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall-k8s-v1.16.sh [new file with mode: 0755]
src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall.sh [new file with mode: 0755]
src/foundation/scripts/setup-cni.sh

index 6a77f6e..214b4e8 100644 (file)
@@ -32,7 +32,11 @@ For more information, please refer the above links in the [Introduction](#Introd
 
 ##Installation
 
-There are 4 yaml files give:
+To install Multus-SRIOV-Calico or Multus-SRIOV-Flannel, set the CNI_TYPE field to 'multus-sriov-calico' or
+'multus-sriov-flannel' accordingly in IEC's installation configuration file named 'config', then run the CNI
+installation with setup-cni.sh (a usage sketch follows this README diff).
+
+For SRIOV CNI with Flannel via Multus CNI, there are 4 yaml files given:
 1. configMap.yaml:
 The resource list configuration file for SRIOV device plugin
 1. multus-sriov-flannel-daemonsets.yaml
@@ -42,9 +46,21 @@ The Flannel CNI installation file
 1. sriov-crd.yaml
 The SRIOV CNI configuration file for the attached SRIOV interface resource.
 
+For SRIOV CNI with Calico via Multus CNI, there are 4 yaml files given:
+1. configMap.yaml:
+The resource list configuration file for SRIOV device plugin
+1. multus-sriov-calico-daemonsets.yaml
+The Multus, SRIOV device plugin&CNI configuration file
+1. calico-daemonset.yaml
+The Calico CNI installation file
+1. sriov-crd.yaml
+The SRIOV CNI configuration file for the attached SRIOV interface resource.
+
 Usually users should modify the `configMap.yaml` and `sriov-crd.yaml` with their own corresponding networking configuration before doing the installation.
 
-A quick installation script is given as `install.sh`, and the uninstallation could be done by call the `uninstall.sh`.
+A quick installation script is given as `install.sh`, and the uninstallation can be done by calling `uninstall.sh`. Before you call install.sh manually to do the install, set your desired POD_NETWORK and other parameters in the installation yaml files, as setup-cni.sh does.
+
+For Kubernetes version >=1.16, there are some changes to the Kubernetes API. A sample installation script for multus-sriov-calico, named install-k8s-v1.16.sh, can be used when your K8s version is >=1.16.
 
 **The `install.sh` should be called after the Kubernetes cluster had been installed but before installing the CNIs.**
 
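
A minimal usage sketch of the flow described in this README change,
assuming the IEC 'config' file takes shell-style variable assignments
(its exact format is not shown here) and that setup-cni.sh needs no
arguments:

# In the IEC installation configuration file named 'config':
CNI_TYPE=multus-sriov-calico        # or: multus-sriov-flannel

# Run the CNI installation once the Kubernetes cluster is up:
./setup-cni.sh

# Alternatively, run install.sh under multus-sriov-calico/ by hand
# after setting POD_NETWORK and other parameters in the yaml files.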
 
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset-k8s-v1.16.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset-k8s-v1.16.yaml
new file mode 100644 (file)
index 0000000..1cbf14c
--- /dev/null
@@ -0,0 +1,641 @@
+# yamllint disable
+# This is a modified Calico daemonset.
+# it is based on: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: calico-config
+  namespace: kube-system
+data:
+  typha_service_name: "none"
+  calico_backend: "bird"
+  veth_mtu: "1440"
+  cni_network_config: |-
+    {
+      "name": "k8s-pod-network",
+      "cniVersion": "0.3.0",
+      "plugins": [
+        {
+          "type": "calico",
+          "log_level": "info",
+          "datastore_type": "kubernetes",
+          "nodename": "__KUBERNETES_NODE_NAME__",
+          "mtu": __CNI_MTU__,
+          "ipam": {
+            "type": "calico-ipam"
+          },
+          "policy": {
+              "type": "k8s"
+          },
+          "kubernetes": {
+              "kubeconfig": "__KUBECONFIG_FILEPATH__"
+          }
+        },
+        {
+          "type": "portmap",
+          "snat": true,
+          "capabilities": {"portMappings": true}
+        }
+      ]
+    }
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+   name: felixconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: FelixConfiguration
+    plural: felixconfigurations
+    singular: felixconfiguration
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamblocks.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMBlock
+    plural: ipamblocks
+    singular: ipamblock
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: blockaffinities.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BlockAffinity
+    plural: blockaffinities
+    singular: blockaffinity
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamhandles.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMHandle
+    plural: ipamhandles
+    singular: ipamhandle
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamconfigs.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMConfig
+    plural: ipamconfigs
+    singular: ipamconfig
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: bgppeers.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BGPPeer
+    plural: bgppeers
+    singular: bgppeer
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: bgpconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BGPConfiguration
+    plural: bgpconfigurations
+    singular: bgpconfiguration
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: ippools.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPPool
+    plural: ippools
+    singular: ippool
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: hostendpoints.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: HostEndpoint
+    plural: hostendpoints
+    singular: hostendpoint
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: clusterinformations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: ClusterInformation
+    plural: clusterinformations
+    singular: clusterinformation
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: globalnetworkpolicies.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: GlobalNetworkPolicy
+    plural: globalnetworkpolicies
+    singular: globalnetworkpolicy
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: globalnetworksets.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: GlobalNetworkSet
+    plural: globalnetworksets
+    singular: globalnetworkset
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: networkpolicies.crd.projectcalico.org
+spec:
+  scope: Namespaced
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: NetworkPolicy
+    plural: networkpolicies
+    singular: networkpolicy
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-kube-controllers
+rules:
+  - apiGroups: [""]
+    resources:
+      - nodes
+    verbs:
+      - watch
+      - list
+      - get
+  - apiGroups: [""]
+    resources:
+      - pods
+    verbs:
+      - get
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ippools
+    verbs:
+      - list
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+      - ipamblocks
+      - ipamhandles
+    verbs:
+      - get
+      - list
+      - create
+      - update
+      - delete
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - clusterinformations
+    verbs:
+      - get
+      - create
+      - update
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-kube-controllers
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: calico-kube-controllers
+subjects:
+- kind: ServiceAccount
+  name: calico-kube-controllers
+  namespace: kube-system
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-node
+rules:
+  - apiGroups: [""]
+    resources:
+      - pods
+      - nodes
+      - namespaces
+    verbs:
+      - get
+  - apiGroups: [""]
+    resources:
+      - endpoints
+      - services
+    verbs:
+      - watch
+      - list
+      - get
+  - apiGroups: [""]
+    resources:
+      - nodes/status
+    verbs:
+      - patch
+      - update
+  - apiGroups: ["networking.k8s.io"]
+    resources:
+      - networkpolicies
+    verbs:
+      - watch
+      - list
+  - apiGroups: [""]
+    resources:
+      - pods
+      - namespaces
+      - serviceaccounts
+    verbs:
+      - list
+      - watch
+  - apiGroups: [""]
+    resources:
+      - pods/status
+    verbs:
+      - patch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - globalfelixconfigs
+      - felixconfigurations
+      - bgppeers
+      - globalbgpconfigs
+      - bgpconfigurations
+      - ippools
+      - ipamblocks
+      - globalnetworkpolicies
+      - globalnetworksets
+      - networkpolicies
+      - clusterinformations
+      - hostendpoints
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ippools
+      - felixconfigurations
+      - clusterinformations
+    verbs:
+      - create
+      - update
+  - apiGroups: [""]
+    resources:
+      - nodes
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - bgpconfigurations
+      - bgppeers
+    verbs:
+      - create
+      - update
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+      - ipamblocks
+      - ipamhandles
+    verbs:
+      - get
+      - list
+      - create
+      - update
+      - delete
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ipamconfigs
+    verbs:
+      - get
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+    verbs:
+      - watch
+  - apiGroups: ["apps"]
+    resources:
+      - daemonsets
+    verbs:
+      - get
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: calico-node
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: calico-node
+subjects:
+- kind: ServiceAccount
+  name: calico-node
+  namespace: kube-system
+---
+kind: DaemonSet
+apiVersion: apps/v1
+metadata:
+  name: calico-node
+  namespace: kube-system
+  labels:
+    k8s-app: calico-node
+spec:
+  selector:
+    matchLabels:
+      k8s-app: calico-node
+  updateStrategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        k8s-app: calico-node
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
+    spec:
+      nodeSelector:
+        beta.kubernetes.io/os: linux
+      hostNetwork: true
+      tolerations:
+        - effect: NoSchedule
+          operator: Exists
+        - key: CriticalAddonsOnly
+          operator: Exists
+        - effect: NoExecute
+          operator: Exists
+      serviceAccountName: calico-node
+      terminationGracePeriodSeconds: 0
+      initContainers:
+        - name: upgrade-ipam
+          image: calico/cni:v3.6.1
+          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
+          env:
+            - name: KUBERNETES_NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CALICO_NETWORKING_BACKEND
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: calico_backend
+          volumeMounts:
+            - mountPath: /var/lib/cni/networks
+              name: host-local-net-dir
+            - mountPath: /host/opt/cni/bin
+              name: cni-bin-dir
+        - name: install-cni
+          image: calico/cni:v3.6.1
+          command: ["/install-cni.sh"]
+          env:
+            - name: CNI_CONF_NAME
+              value: "10-calico.conflist"
+            - name: CNI_NETWORK_CONFIG
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: cni_network_config
+            - name: KUBERNETES_NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CNI_MTU
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: veth_mtu
+            - name: SLEEP
+              value: "false"
+          volumeMounts:
+            - mountPath: /host/opt/cni/bin
+              name: cni-bin-dir
+            - mountPath: /host/etc/cni/net.d
+              name: cni-net-dir
+      containers:
+        - name: calico-node
+          image: calico/node:v3.6.1
+          env:
+            # Use Kubernetes API as the backing datastore.
+            - name: DATASTORE_TYPE
+              value: "kubernetes"
+            # Wait for the datastore.
+            - name: WAIT_FOR_DATASTORE
+              value: "true"
+            # Set based on the k8s node name.
+            - name: NODENAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            # Choose the backend to use.
+            - name: CALICO_NETWORKING_BACKEND
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: calico_backend
+            # Cluster type to identify the deployment type
+            - name: CLUSTER_TYPE
+              value: "k8s,bgp"
+            # Auto-detect the BGP IP address.
+            - name: IP
+              value: "autodetect"
+            - name: IP_AUTODETECTION_METHOD
+              value: "can-reach=www.google.com"
+            # Enable IPIP
+            - name: CALICO_IPV4POOL_IPIP
+              value: "Always"
+            # Set MTU for tunnel device used if ipip is enabled
+            - name: FELIX_IPINIPMTU
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: veth_mtu
+            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
+            # chosen from this range. Changing this value after installation will have
+            # no effect. This should fall within `--cluster-cidr`.
+            - name: CALICO_IPV4POOL_CIDR
+              value: "10.244.0.0/16"
+            # Disable file logging so `kubectl logs` works.
+            - name: CALICO_DISABLE_FILE_LOGGING
+              value: "true"
+            # Set Felix endpoint to host default action to ACCEPT.
+            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
+              value: "ACCEPT"
+            # Disable IPv6 on Kubernetes.
+            - name: FELIX_IPV6SUPPORT
+              value: "false"
+            # Set Felix logging to "info"
+            - name: FELIX_LOGSEVERITYSCREEN
+              value: "info"
+            - name: FELIX_HEALTHENABLED
+              value: "true"
+          securityContext:
+            privileged: true
+          resources:
+            requests:
+              cpu: 250m
+          livenessProbe:
+            httpGet:
+              path: /liveness
+              port: 9099
+              host: localhost
+            periodSeconds: 10
+            initialDelaySeconds: 10
+            failureThreshold: 6
+          readinessProbe:
+            exec:
+              command:
+              - /bin/calico-node
+              - -bird-ready
+              - -felix-ready
+            periodSeconds: 10
+          volumeMounts:
+            - mountPath: /lib/modules
+              name: lib-modules
+              readOnly: true
+            - mountPath: /run/xtables.lock
+              name: xtables-lock
+              readOnly: false
+            - mountPath: /var/run/calico
+              name: var-run-calico
+              readOnly: false
+            - mountPath: /var/lib/calico
+              name: var-lib-calico
+              readOnly: false
+      volumes:
+        - name: lib-modules
+          hostPath:
+            path: /lib/modules
+        - name: var-run-calico
+          hostPath:
+            path: /var/run/calico
+        - name: var-lib-calico
+          hostPath:
+            path: /var/lib/calico
+        - name: xtables-lock
+          hostPath:
+            path: /run/xtables.lock
+            type: FileOrCreate
+        - name: cni-bin-dir
+          hostPath:
+            path: /opt/cni/bin
+        - name: cni-net-dir
+          hostPath:
+            # NOTE: write the Calico conflist to a Multus-specific directory instead of the default /etc/cni/net.d
+            path: /etc/cni/multus/calico/net.d
+        - name: host-local-net-dir
+          hostPath:
+            path: /var/lib/cni/networks
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: calico-node
+  namespace: kube-system
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: calico-kube-controllers
+  namespace: kube-system
+  labels:
+    k8s-app: calico-kube-controllers
+  annotations:
+    scheduler.alpha.kubernetes.io/critical-pod: ''
+spec:
+  selector:
+    matchLabels:
+      k8s-app: calico-kube-controllers
+  replicas: 1
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      name: calico-kube-controllers
+      namespace: kube-system
+      labels:
+        k8s-app: calico-kube-controllers
+    spec:
+      nodeSelector:
+        beta.kubernetes.io/os: linux
+      tolerations:
+        - key: CriticalAddonsOnly
+          operator: Exists
+        - key: node-role.kubernetes.io/master
+          effect: NoSchedule
+      serviceAccountName: calico-kube-controllers
+      containers:
+        - name: calico-kube-controllers
+          image: calico/kube-controllers:v3.6.1
+          env:
+            - name: ENABLED_CONTROLLERS
+              value: node
+            - name: DATASTORE_TYPE
+              value: kubernetes
+          readinessProbe:
+            exec:
+              command:
+              - /usr/bin/check-status
+              - -r
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: calico-kube-controllers
+  namespace: kube-system
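
A hedged sketch of aligning the CALICO_IPV4POOL_CIDR above with the
cluster's pod CIDR before applying the Calico manifests by hand, as the
README advises; the actual substitution performed by setup-cni.sh is
not shown in this change, and POD_NETWORK is assumed to hold the
cluster's --pod-network-cidr value:

POD_NETWORK=10.244.0.0/16
sed -i "s#10.244.0.0/16#${POD_NETWORK}#g" \
  calico-daemonset.yaml calico-daemonset-k8s-v1.16.yaml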
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/calico-daemonset.yaml
new file mode 100644 (file)
index 0000000..dedb813
--- /dev/null
@@ -0,0 +1,638 @@
+# yamllint disable
+# This is a modified Calico daemonset.
+# it is based on: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: calico-config
+  namespace: kube-system
+data:
+  typha_service_name: "none"
+  calico_backend: "bird"
+  veth_mtu: "1440"
+  cni_network_config: |-
+    {
+      "name": "k8s-pod-network",
+      "cniVersion": "0.3.0",
+      "plugins": [
+        {
+          "type": "calico",
+          "log_level": "info",
+          "datastore_type": "kubernetes",
+          "nodename": "__KUBERNETES_NODE_NAME__",
+          "mtu": __CNI_MTU__,
+          "ipam": {
+            "type": "calico-ipam"
+          },
+          "policy": {
+              "type": "k8s"
+          },
+          "kubernetes": {
+              "kubeconfig": "__KUBECONFIG_FILEPATH__"
+          }
+        },
+        {
+          "type": "portmap",
+          "snat": true,
+          "capabilities": {"portMappings": true}
+        }
+      ]
+    }
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+   name: felixconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: FelixConfiguration
+    plural: felixconfigurations
+    singular: felixconfiguration
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamblocks.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMBlock
+    plural: ipamblocks
+    singular: ipamblock
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: blockaffinities.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BlockAffinity
+    plural: blockaffinities
+    singular: blockaffinity
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamhandles.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMHandle
+    plural: ipamhandles
+    singular: ipamhandle
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: ipamconfigs.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPAMConfig
+    plural: ipamconfigs
+    singular: ipamconfig
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: bgppeers.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BGPPeer
+    plural: bgppeers
+    singular: bgppeer
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: bgpconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BGPConfiguration
+    plural: bgpconfigurations
+    singular: bgpconfiguration
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: ippools.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: IPPool
+    plural: ippools
+    singular: ippool
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: hostendpoints.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: HostEndpoint
+    plural: hostendpoints
+    singular: hostendpoint
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: clusterinformations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: ClusterInformation
+    plural: clusterinformations
+    singular: clusterinformation
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: globalnetworkpolicies.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: GlobalNetworkPolicy
+    plural: globalnetworkpolicies
+    singular: globalnetworkpolicy
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: globalnetworksets.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: GlobalNetworkSet
+    plural: globalnetworksets
+    singular: globalnetworkset
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: networkpolicies.crd.projectcalico.org
+spec:
+  scope: Namespaced
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: NetworkPolicy
+    plural: networkpolicies
+    singular: networkpolicy
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-kube-controllers
+rules:
+  - apiGroups: [""]
+    resources:
+      - nodes
+    verbs:
+      - watch
+      - list
+      - get
+  - apiGroups: [""]
+    resources:
+      - pods
+    verbs:
+      - get
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ippools
+    verbs:
+      - list
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+      - ipamblocks
+      - ipamhandles
+    verbs:
+      - get
+      - list
+      - create
+      - update
+      - delete
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - clusterinformations
+    verbs:
+      - get
+      - create
+      - update
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-kube-controllers
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: calico-kube-controllers
+subjects:
+- kind: ServiceAccount
+  name: calico-kube-controllers
+  namespace: kube-system
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: calico-node
+rules:
+  - apiGroups: [""]
+    resources:
+      - pods
+      - nodes
+      - namespaces
+    verbs:
+      - get
+  - apiGroups: [""]
+    resources:
+      - endpoints
+      - services
+    verbs:
+      - watch
+      - list
+      - get
+  - apiGroups: [""]
+    resources:
+      - nodes/status
+    verbs:
+      - patch
+      - update
+  - apiGroups: ["networking.k8s.io"]
+    resources:
+      - networkpolicies
+    verbs:
+      - watch
+      - list
+  - apiGroups: [""]
+    resources:
+      - pods
+      - namespaces
+      - serviceaccounts
+    verbs:
+      - list
+      - watch
+  - apiGroups: [""]
+    resources:
+      - pods/status
+    verbs:
+      - patch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - globalfelixconfigs
+      - felixconfigurations
+      - bgppeers
+      - globalbgpconfigs
+      - bgpconfigurations
+      - ippools
+      - ipamblocks
+      - globalnetworkpolicies
+      - globalnetworksets
+      - networkpolicies
+      - clusterinformations
+      - hostendpoints
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ippools
+      - felixconfigurations
+      - clusterinformations
+    verbs:
+      - create
+      - update
+  - apiGroups: [""]
+    resources:
+      - nodes
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - bgpconfigurations
+      - bgppeers
+    verbs:
+      - create
+      - update
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+      - ipamblocks
+      - ipamhandles
+    verbs:
+      - get
+      - list
+      - create
+      - update
+      - delete
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - ipamconfigs
+    verbs:
+      - get
+  - apiGroups: ["crd.projectcalico.org"]
+    resources:
+      - blockaffinities
+    verbs:
+      - watch
+  - apiGroups: ["apps"]
+    resources:
+      - daemonsets
+    verbs:
+      - get
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: calico-node
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: calico-node
+subjects:
+- kind: ServiceAccount
+  name: calico-node
+  namespace: kube-system
+---
+kind: DaemonSet
+apiVersion: extensions/v1beta1
+metadata:
+  name: calico-node
+  namespace: kube-system
+  labels:
+    k8s-app: calico-node
+spec:
+  selector:
+    matchLabels:
+      k8s-app: calico-node
+  updateStrategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        k8s-app: calico-node
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
+    spec:
+      nodeSelector:
+        beta.kubernetes.io/os: linux
+      hostNetwork: true
+      tolerations:
+        - effect: NoSchedule
+          operator: Exists
+        - key: CriticalAddonsOnly
+          operator: Exists
+        - effect: NoExecute
+          operator: Exists
+      serviceAccountName: calico-node
+      terminationGracePeriodSeconds: 0
+      initContainers:
+        - name: upgrade-ipam
+          image: calico/cni:v3.6.1
+          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
+          env:
+            - name: KUBERNETES_NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CALICO_NETWORKING_BACKEND
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: calico_backend
+          volumeMounts:
+            - mountPath: /var/lib/cni/networks
+              name: host-local-net-dir
+            - mountPath: /host/opt/cni/bin
+              name: cni-bin-dir
+        - name: install-cni
+          image: calico/cni:v3.6.1
+          command: ["/install-cni.sh"]
+          env:
+            - name: CNI_CONF_NAME
+              value: "10-calico.conflist"
+            - name: CNI_NETWORK_CONFIG
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: cni_network_config
+            - name: KUBERNETES_NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CNI_MTU
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: veth_mtu
+            - name: SLEEP
+              value: "false"
+          volumeMounts:
+            - mountPath: /host/opt/cni/bin
+              name: cni-bin-dir
+            - mountPath: /host/etc/cni/net.d
+              name: cni-net-dir
+      containers:
+        - name: calico-node
+          image: calico/node:v3.6.1
+          env:
+            # Use Kubernetes API as the backing datastore.
+            - name: DATASTORE_TYPE
+              value: "kubernetes"
+            # Wait for the datastore.
+            - name: WAIT_FOR_DATASTORE
+              value: "true"
+            # Set based on the k8s node name.
+            - name: NODENAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            # Choose the backend to use.
+            - name: CALICO_NETWORKING_BACKEND
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: calico_backend
+            # Cluster type to identify the deployment type
+            - name: CLUSTER_TYPE
+              value: "k8s,bgp"
+            # Auto-detect the BGP IP address.
+            - name: IP
+              value: "autodetect"
+            - name: IP_AUTODETECTION_METHOD
+              value: "can-reach=www.google.com"
+            # Enable IPIP
+            - name: CALICO_IPV4POOL_IPIP
+              value: "Always"
+            # Set MTU for tunnel device used if ipip is enabled
+            - name: FELIX_IPINIPMTU
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: veth_mtu
+            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
+            # chosen from this range. Changing this value after installation will have
+            # no effect. This should fall within `--cluster-cidr`.
+            - name: CALICO_IPV4POOL_CIDR
+              value: "10.244.0.0/16"
+            # Disable file logging so `kubectl logs` works.
+            - name: CALICO_DISABLE_FILE_LOGGING
+              value: "true"
+            # Set Felix endpoint to host default action to ACCEPT.
+            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
+              value: "ACCEPT"
+            # Disable IPv6 on Kubernetes.
+            - name: FELIX_IPV6SUPPORT
+              value: "false"
+            # Set Felix logging to "info"
+            - name: FELIX_LOGSEVERITYSCREEN
+              value: "info"
+            - name: FELIX_HEALTHENABLED
+              value: "true"
+          securityContext:
+            privileged: true
+          resources:
+            requests:
+              cpu: 250m
+          livenessProbe:
+            httpGet:
+              path: /liveness
+              port: 9099
+              host: localhost
+            periodSeconds: 10
+            initialDelaySeconds: 10
+            failureThreshold: 6
+          readinessProbe:
+            exec:
+              command:
+              - /bin/calico-node
+              - -bird-ready
+              - -felix-ready
+            periodSeconds: 10
+          volumeMounts:
+            - mountPath: /lib/modules
+              name: lib-modules
+              readOnly: true
+            - mountPath: /run/xtables.lock
+              name: xtables-lock
+              readOnly: false
+            - mountPath: /var/run/calico
+              name: var-run-calico
+              readOnly: false
+            - mountPath: /var/lib/calico
+              name: var-lib-calico
+              readOnly: false
+      volumes:
+        - name: lib-modules
+          hostPath:
+            path: /lib/modules
+        - name: var-run-calico
+          hostPath:
+            path: /var/run/calico
+        - name: var-lib-calico
+          hostPath:
+            path: /var/lib/calico
+        - name: xtables-lock
+          hostPath:
+            path: /run/xtables.lock
+            type: FileOrCreate
+        - name: cni-bin-dir
+          hostPath:
+            path: /opt/cni/bin
+        - name: cni-net-dir
+          hostPath:
+            # NOTE: write the Calico conflist to a Multus-specific directory instead of the default /etc/cni/net.d
+            path: /etc/cni/multus/calico/net.d
+        - name: host-local-net-dir
+          hostPath:
+            path: /var/lib/cni/networks
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: calico-node
+  namespace: kube-system
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: calico-kube-controllers
+  namespace: kube-system
+  labels:
+    k8s-app: calico-kube-controllers
+  annotations:
+    scheduler.alpha.kubernetes.io/critical-pod: ''
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      name: calico-kube-controllers
+      namespace: kube-system
+      labels:
+        k8s-app: calico-kube-controllers
+    spec:
+      nodeSelector:
+        beta.kubernetes.io/os: linux
+      tolerations:
+        - key: CriticalAddonsOnly
+          operator: Exists
+        - key: node-role.kubernetes.io/master
+          effect: NoSchedule
+      serviceAccountName: calico-kube-controllers
+      containers:
+        - name: calico-kube-controllers
+          image: calico/kube-controllers:v3.6.1
+          env:
+            - name: ENABLED_CONTROLLERS
+              value: node
+            - name: DATASTORE_TYPE
+              value: kubernetes
+          readinessProbe:
+            exec:
+              command:
+              - /usr/bin/check-status
+              - -r
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: calico-kube-controllers
+  namespace: kube-system
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/configMap.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/configMap.yaml
new file mode 100644 (file)
index 0000000..a2309ce
--- /dev/null
@@ -0,0 +1,29 @@
+# yamllint disable
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: sriovdp-config
+  namespace: kube-system
+data:
+  config.json: |
+    {
+        "resourceList": [{
+                "resourceName": "ps225_sriov_netdevice",
+                "selectors": {
+                    "vendors": ["14e4"],
+                    "devices": ["d800"],
+                    "drivers": ["bnxt_en"],
+                    "pfNames": ["enp8s0f0np0"]
+                }
+            },
+            {
+                "resourceName": "intel_sriov_netdevice",
+                "selectors": {
+                    "vendors": ["8086"],
+                    "devices": ["154c"],
+                    "drivers": ["i40evf"],
+                    "pfNames": ["enp12s0f0"]
+                }
+            }
+        ]
+    }
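
With the PS225 entry above and the SRIOV device plugin started with
--resource-prefix=arm.com (see the daemonsets later in this commit),
the VFs should be advertised as the extended resource
arm.com/ps225_sriov_netdevice. A hedged sketch of a matching
NetworkAttachmentDefinition follows; the name and IPAM values are
placeholders, and the real definition belongs in sriov-crd.yaml:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-ps225
  annotations:
    k8s.v1.cni.cncf.io/resourceName: arm.com/ps225_sriov_netdevice
spec:
  config: '{
      "type": "sriov",
      "name": "sriov-network",
      "ipam": {
          "type": "host-local",
          "subnet": "10.56.217.0/24",
          "routes": [{ "dst": "0.0.0.0/0" }],
          "gateway": "10.56.217.1"
      }
  }'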
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/install-k8s-v1.16.sh b/src/foundation/scripts/cni/multus/multus-sriov-calico/install-k8s-v1.16.sh
new file mode 100755 (executable)
index 0000000..2f0af5e
--- /dev/null
@@ -0,0 +1,40 @@
+#!/bin/bash -ex
+# shellcheck disable=SC2016,SC2046
+
+function wait_for {
+  # Execute in a subshell to prevent local variable override during recursion
+  (
+    local total_attempts=$1; shift
+    local cmdstr=$*
+    local sleep_time=2
+    echo -e "\n[wait_for] Waiting for cmd to return success: ${cmdstr}"
+    # shellcheck disable=SC2034
+    for attempt in $(seq "${total_attempts}"); do
+      echo "[wait_for] Attempt ${attempt}/${total_attempts%.*} for: ${cmdstr}"
+      # shellcheck disable=SC2015
+      eval "${cmdstr}" && echo "[wait_for] OK: ${cmdstr}" && return 0 || true
+      sleep "${sleep_time}"
+    done
+    echo "[wait_for] ERROR: Failed after max attempts: ${cmdstr}"
+    return 1
+  )
+}
+
+kubectl create -f configMap.yaml
+wait_for 5 'test $(kubectl get configmap -n kube-system | grep sriovdp-config -c ) -eq 1'
+
+kubectl create -f multus-sriov-calico-daemonsets-k8s-v1.16.yaml
+wait_for 100 'test $(kubectl get pods -n kube-system | grep -e "kube-multus-ds" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "kube-sriov-cni" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "kube-sriov-device-plugin" | grep "Running" -c) -ge 1'
+
+kubectl create -f calico-daemonset-k8s-v1.16.yaml
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "calico-kube-controllers" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "calico-node" | grep "Running" -c) -ge 1'
+
+kubectl create -f sriov-crd.yaml
+wait_for 5 'test $(kubectl get crd | grep -e "network-attachment-definitions" -c) -ge 1'
+
+sleep 2
+kubectl get node $(hostname) -o json | jq '.status.allocatable' || true
+kubectl get pods --all-namespaces
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/install.sh b/src/foundation/scripts/cni/multus/multus-sriov-calico/install.sh
new file mode 100755 (executable)
index 0000000..7fc90ff
--- /dev/null
@@ -0,0 +1,43 @@
+#!/bin/bash -ex
+# shellcheck disable=SC2016,SC2046
+
+function wait_for {
+  # Execute in a subshell to prevent local variable override during recursion
+  (
+    local total_attempts=$1; shift
+    local cmdstr=$*
+    local sleep_time=2
+    echo -e "\n[wait_for] Waiting for cmd to return success: ${cmdstr}"
+    # shellcheck disable=SC2034
+    for attempt in $(seq "${total_attempts}"); do
+      echo "[wait_for] Attempt ${attempt}/${total_attempts%.*} for: ${cmdstr}"
+      # shellcheck disable=SC2015
+      eval "${cmdstr}" && echo "[wait_for] OK: ${cmdstr}" && return 0 || true
+      sleep "${sleep_time}"
+    done
+    echo "[wait_for] ERROR: Failed after max attempts: ${cmdstr}"
+    return 1
+  )
+}
+
+
+kubectl create -f configMap.yaml
+wait_for 5 'test $(kubectl get configmap -n kube-system | grep sriovdp-config -c ) -eq 1'
+
+kubectl create -f multus-sriov-calico-daemonsets.yaml
+wait_for 100 'test $(kubectl get pods -n kube-system | grep -e "kube-multus-ds" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "kube-sriov-cni" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "kube-sriov-device-plugin" | grep "Running" -c) -ge 1'
+#kubectl create -f multus-sriov-calico-daemonsets-k8s-v1.16.yaml
+
+kubectl create -f calico-daemonset.yaml
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "calico-kube-controllers" | grep "Running" -c) -ge 1'
+wait_for 20 'test $(kubectl get pods -n kube-system | grep -e "calico-node" | grep "Running" -c) -ge 1'
+#kubectl create -f calico-daemonset-k8s-v1.16.yaml
+
+kubectl create -f sriov-crd.yaml
+wait_for 5 'test $(kubectl get crd | grep -e "network-attachment-definitions" -c) -ge 1'
+
+sleep 2
+kubectl get node $(hostname) -o json | jq '.status.allocatable' || true
+kubectl get pods --all-namespaces
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets-k8s-v1.16.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets-k8s-v1.16.yaml
new file mode 100644 (file)
index 0000000..0238e6d
--- /dev/null
@@ -0,0 +1,614 @@
+# yamllint disable
+# This yaml file contains the necessary configuration to set up
+# a demo environment for Multus + SR-IOV; the config includes
+# the following pieces:
+# 1. Multus ConfigMap
+# 2. Network Plumbing Working Group Spec Version 1 CustomResourceDefinition
+# 3. Multus ClusterRole & ClusterRoleBinding
+# 4. Multus & SR-IOV Device Plugin ServiceAccounts
+# 5. Multus & SR-IOV Device Plugin & SR-IOV CNI DaemonSets
+
+# Note: This yaml file will not create the custom SR-IOV CRD
+# which will be specified in the Pod spec annotation. Below is
+# an example of an SR-IOV CRD:
+#
+# apiVersion: "k8s.cni.cncf.io/v1"
+# kind: NetworkAttachmentDefinition
+# metadata:
+#   name: sriov-net1
+#   annotations:
+#     k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
+# spec:
+#   config: '{
+#       "type": "sriov",
+#        "name": "sriov-network",
+#       "ipam": {
+#               "type": "host-local",
+#               "subnet": "10.56.217.0/24",
+#               "routes": [{
+#                       "dst": "0.0.0.0/0"
+#               }],
+#               "gateway": "10.56.217.1"
+#       }
+#   }'
+
+# An example of Pod spec using above SR-IOV CRD:
+#
+# apiVersion: v1
+# kind: Pod
+# metadata:
+#   name: testpod1
+#   labels:
+#     env: test
+#   annotations:
+#     k8s.v1.cni.cncf.io/networks: sriov-net1
+# spec:
+#   containers:
+#   - name: appcntr1
+#     image: centos/tools
+#     imagePullPolicy: IfNotPresent
+#     command: [ "/bin/bash", "-c", "--" ]
+#     args: [ "while true; do sleep 300000; done;" ]
+#     resources:
+#       requests:
+#         intel.com/sriov: '1'
+#       limits:
+#        intel.com/sriov: '1'
+
+
+# --------------------------------------------------------------------
+
+# 1. Multus ConfigMap
+#
+# This configMap assumes that:
+# - Kubeconfig file is located at "/etc/kubernetes/admin.conf" on host
+# - Default master plugin for Multus is set to Calico
+#
+# Note: If either of the above is not true in your environment,
+# make sure they are properly set to the correct values.
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: multus-cni-config
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+data:
+  cni-conf.json: |
+    {
+      "name": "multus-cni-network",
+      "type": "multus",
+      "capabilities": {
+        "portMappings": true
+      },
+      "delegates": [
+        {
+          "cniVersion": "0.3.1",
+          "name": "default-cni-network",
+          "plugins": [
+            {
+              "name": "k8s-pod-network",
+              "cniVersion": "0.3.0",
+              "type": "calico",
+              "log_level": "info",
+              "datastore_type": "kubernetes",
+              "nodename": "__KUBERNETES_NODE_NAME__",
+              "mtu": 1440,
+              "ipam": {
+                "type": "calico-ipam"
+              },
+              "policy": {
+                "type": "k8s"
+              },
+              "kubernetes": {
+                "kubeconfig": "/etc/kubernetes/admin.conf"
+              }
+            },
+            {
+              "type": "portmap",
+              "snat": true,
+              "capabilities": {"portMappings": true}
+            }
+          ]
+        }
+      ],
+      "kubeconfig": "/etc/kubernetes/admin.conf"
+    }
+    #"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
+# 2. NPWG spec v1 Network Attachment Definition
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: network-attachment-definitions.k8s.cni.cncf.io
+spec:
+  group: k8s.cni.cncf.io
+  scope: Namespaced
+  names:
+    plural: network-attachment-definitions
+    singular: network-attachment-definition
+    kind: NetworkAttachmentDefinition
+    shortNames:
+    - net-attach-def
+  versions:
+    - name: v1
+      served: true
+      storage: true
+      schema:
+        openAPIV3Schema:
+          type: object
+          properties:
+            spec:
+              type: object
+              properties:
+                config:
+                  type: string
+# 3.1 Multus Cluster Role
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: multus
+rules:
+  - apiGroups: ["k8s.cni.cncf.io"]
+    resources:
+      - '*'
+    verbs:
+      - '*'
+  - apiGroups:
+      - ""
+    resources:
+      - pods
+      - pods/status
+    verbs:
+      - get
+      - update
+
+# 3.2 Multus Cluster Role Binding
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: multus
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: multus
+subjects:
+- kind: ServiceAccount
+  name: multus
+  namespace: kube-system
+
+# 4.1 SR-IOV Device Plugin ServiceAccount
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: sriov-device-plugin
+  namespace: kube-system
+
+# 4.2 Multus ServiceAccount
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: multus
+  namespace: kube-system
+
+# 5.1 SR-IOV Device Plugin DaemonSet
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-device-plugin-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriovdp
+spec:
+  selector:
+    matchLabels:
+      name: sriov-device-plugin
+  template:
+    metadata:
+      labels:
+        name: sriov-device-plugin
+        tier: node
+        app: sriovdp
+    spec:
+      hostNetwork: true
+      hostPID: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+              #- key: node-role.kubernetes.io/master
+              #        operator: Exists
+              #        effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: sriov-device-plugin
+      containers:
+      - name: kube-sriovdp
+        #image: nfvpe/sriov-device-plugin
+        image: iecedge/sriov-device-plugin-amd64
+        imagePullPolicy: IfNotPresent
+        args:
+        - --log-dir=sriovdp
+        - --log-level=10
+        - --resource-prefix=arm.com
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: devicesock
+          mountPath: /var/lib/kubelet/
+          readOnly: false
+        - name: log
+          mountPath: /var/log
+        - name: config-volume
+          mountPath: /etc/pcidp
+      volumes:
+        - name: devicesock
+          hostPath:
+            path: /var/lib/kubelet/
+        - name: log
+          hostPath:
+            path: /var/log
+        - name: config-volume
+          configMap:
+            name: sriovdp-config
+            items:
+            - key: config.json
+              path: config.json
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-device-plugin-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriovdp
+spec:
+  selector:
+    matchLabels:
+      name: sriov-device-plugin
+  template:
+    metadata:
+      labels:
+        name: sriov-device-plugin
+        tier: node
+        app: sriovdp
+    spec:
+      hostNetwork: true
+      hostPID: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+              #- key: node-role.kubernetes.io/master
+              #        operator: Exists
+              #        effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: sriov-device-plugin
+      containers:
+      - name: kube-sriovdp
+        #image: nfvpe/sriov-device-plugin
+        image: iecedge/sriov-device-plugin-arm64
+        imagePullPolicy: IfNotPresent
+        #imagePullPolicy: Never
+        args:
+        - --log-dir=sriovdp
+        - --log-level=10
+        - --resource-prefix=arm.com
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: devicesock
+          mountPath: /var/lib/kubelet/
+          readOnly: false
+        - name: log
+          mountPath: /var/log
+        - name: config-volume
+          mountPath: /etc/pcidp
+      volumes:
+        - name: devicesock
+          hostPath:
+            path: /var/lib/kubelet/
+        - name: log
+          hostPath:
+            path: /var/log
+        - name: config-volume
+          configMap:
+            name: sriovdp-config
+            items:
+            - key: config.json
+              path: config.json
+
+# 5.2 SR-IOV CNI DaemonSet
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-cni-ds-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriov-cni
+spec:
+  selector:
+    matchLabels:
+      name: sriov-cni
+  template:
+    metadata:
+      labels:
+        name: sriov-cni
+        tier: node
+        app: sriov-cni
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      - key: node-role.kubernetes.io/master
+        operator: Exists
+        effect: NoSchedule
+      containers:
+      - name: kube-sriov-cni
+        #image: nfvpe/sriov-cni:latest
+        image: iecedge/sriov-cni-amd64:latest
+        imagePullPolicy: IfNotPresent
+        securityContext:
+          privileged: true
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        volumeMounts:
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+      volumes:
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-cni-ds-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriov-cni
+spec:
+  selector:
+    matchLabels:
+      name: sriov-cni
+  template:
+    metadata:
+      labels:
+        name: sriov-cni
+        tier: node
+        app: sriov-cni
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+              #- key: node-role.kubernetes.io/master
+              #        operator: Exists
+              #        effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      containers:
+      - name: kube-sriov-cni
+        #image: nfvpe/sriov-cni-arm64:latest
+        image: iecedge/sriov-cni-arm64:latest
+        imagePullPolicy: IfNotPresent
+        securityContext:
+          privileged: true
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        volumeMounts:
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+      volumes:
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+
+# 5.3 Multus DaemonSet
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-multus-ds-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+    name: multus
+spec:
+  selector:
+    matchLabels:
+      name: multus
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: multus
+        name: multus
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: multus
+      containers:
+      - name: kube-multus
+        #image: nfvpe/multus:v3.3
+        #- "--multus-conf-file=auto"
+        #- "--cni-version=0.3.1"
+        #image: nfvpe/multus:v3.4
+        image: iecedge/multus-amd64:v3.4
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: KUBERNETES_NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        command:
+        - /bin/bash
+        - -cex
+        - |
+          #!/bin/bash
+          sed "s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g" /tmp/multus-conf/70-multus.conf.template > /tmp/multus-conf/70-multus.conf
+          /entrypoint.sh \
+            --multus-conf-file=/tmp/multus-conf/70-multus.conf
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: cni
+          mountPath: /host/etc/cni/net.d
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+          #- name: multus-cfg
+          #mountPath: /tmp/multus-conf
+          #readOnly: false
+        - name: multus-cfg
+          mountPath: /tmp/multus-conf/70-multus.conf.template
+          subPath: "cni-conf.json"
+        - name: kubernetes-cfg-dir
+          mountPath: /etc/kubernetes
+      volumes:
+        - name: cni
+          hostPath:
+            path: /etc/cni/net.d
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+            #- name: multus-cfg
+            #configMap:
+            #name: multus-cni-config
+            #items:
+            #- key: cni-conf.json
+            #  path: 70-multus.conf.template
+        - name: multus-cfg
+          configMap:
+            name: multus-cni-config
+        - name: kubernetes-cfg-dir
+          hostPath:
+            path: /etc/kubernetes
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-multus-ds-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+    name: multus
+spec:
+  selector:
+    matchLabels:
+      name: multus
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: multus
+        name: multus
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: multus
+      containers:
+      - name: kube-multus
+        #image: nfvpe/multus:v3.3
+        #image: iecedge/multus-arm64:latest
+        #- "--multus-conf-file=auto"
+        #- "--cni-version=0.3.1"
+        image: iecedge/multus-arm64:v3.4
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: KUBERNETES_NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        command:
+        - /bin/bash
+        - -cex
+        - |
+          #!/bin/bash
+          sed "s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g" /tmp/multus-conf/70-multus.conf.template > /tmp/multus-conf/70-multus.conf
+          /entrypoint.sh \
+            --multus-conf-file=/tmp/multus-conf/70-multus.conf
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: cni
+          mountPath: /host/etc/cni/net.d
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+        #- name: multus-cfg
+        #  mountPath: /tmp/multus-conf
+        #  readOnly: false
+        - name: multus-cfg
+          mountPath: /tmp/multus-conf/70-multus.conf.template
+          subPath: "cni-conf.json"
+        - name: kubernetes-cfg-dir
+          mountPath: /etc/kubernetes
+      volumes:
+        - name: cni
+          hostPath:
+            path: /etc/cni/net.d
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+        #- name: multus-cfg
+        #  configMap:
+        #    name: multus-cni-config
+        #    items:
+        #    - key: cni-conf.json
+        #      path: 70-multus.conf.template
+        - name: multus-cfg
+          configMap:
+            name: multus-cni-config
+        - name: kubernetes-cfg-dir
+          hostPath:
+            path: /etc/kubernetes
+
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/multus-sriov-calico-daemonsets.yaml
new file mode 100644 (file)
index 0000000..bb84657
--- /dev/null
@@ -0,0 +1,592 @@
+# yamllint disable
+# This yaml file contains the necessary configuration to set up
+# a demo environment for Multus + SR-IOV with Calico as the default
+# network; the config includes the following pieces:
+# 1. Multus ConfigMap
+# 2. Network Plumbing Working Group Spec Version 1 CustomResourceDefinition
+# 3. Multus ClusterRole & ClusterRoleBinding
+# 4. Multus & SR-IOV Device Plugin ServiceAccounts
+# 5. Multus & SR-IOV Device Plugin & SR-IOV CNI DaemonSets
+
+# Note: This yaml file will not create the custom SR-IOV CRD
+# that is referenced in the Pod spec annotation. Below is
+# an example of such an SR-IOV CRD:
+#
+# apiVersion: "k8s.cni.cncf.io/v1"
+# kind: NetworkAttachmentDefinition
+# metadata:
+#   name: sriov-net1
+#   annotations:
+#     k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
+# spec:
+#   config: '{
+#       "type": "sriov",
+#       "name": "sriov-network",
+#       "ipam": {
+#               "type": "host-local",
+#               "subnet": "10.56.217.0/24",
+#               "routes": [{
+#                       "dst": "0.0.0.0/0"
+#               }],
+#               "gateway": "10.56.217.1"
+#       }
+#   }'
+
+# An example of Pod spec using above SR-IOV CRD:
+#
+# apiVersion: v1
+# kind: Pod
+# metadata:
+#   name: testpod1
+#   labels:
+#     env: test
+#   annotations:
+#     k8s.v1.cni.cncf.io/networks: sriov-net1
+# spec:
+#   containers:
+#   - name: appcntr1
+#     image: centos/tools
+#     imagePullPolicy: IfNotPresent
+#     command: [ "/bin/bash", "-c", "--" ]
+#     args: [ "while true; do sleep 300000; done;" ]
+#     resources:
+#       requests:
+#         intel.com/sriov: '1'
+#       limits:
+#         intel.com/sriov: '1'
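+#
+# Note: with the SR-IOV device plugin settings and sriov-crd.yaml shipped in
+# this directory, the resource to request would be arm.com/ps225_sriov_netdevice
+# rather than intel.com/sriov; adjust the example to match your own configMap.yaml.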
+
+
+# --------------------------------------------------------------------
+
+# 1. Multus ConfigMap
+#
+# This configMap assumes that:
+# - The kubeconfig file is located at "/etc/kubernetes/admin.conf" on the host
+# - The default master plugin for Multus is set to Calico
+#
+# Note: If either of the above does not hold in your environment,
+# make sure they are set to the correct values.
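+#
+# The "__KUBERNETES_NODE_NAME__" placeholder in the Calico delegate below is
+# filled in with the actual node name by the Multus DaemonSet at startup
+# (see the sed command in its container spec).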
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: multus-cni-config
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+data:
+  cni-conf.json: |
+    {
+      "name": "multus-cni-network",
+      "type": "multus",
+      "capabilities": {
+        "portMappings": true
+      },
+      "delegates": [
+        {
+          "cniVersion": "0.3.1",
+          "name": "default-cni-network",
+          "plugins": [
+            {
+              "name": "k8s-pod-network",
+              "cniVersion": "0.3.0",
+              "type": "calico",
+              "log_level": "info",
+              "datastore_type": "kubernetes",
+              "nodename": "__KUBERNETES_NODE_NAME__",
+              "mtu": 1440,
+              "ipam": {
+                "type": "calico-ipam"
+              },
+              "policy": {
+                "type": "k8s"
+              },
+              "kubernetes": {
+                "kubeconfig": "/etc/kubernetes/admin.conf"
+              }
+            },
+            {
+              "type": "portmap",
+              "snat": true,
+              "capabilities": {"portMappings": true}
+            }
+          ]
+        }
+      ],
+      "kubeconfig": "/etc/kubernetes/admin.conf"
+    }
+  # Alternatively: "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
+# 2. NPWG spec v1 Network Attachment Definition
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: network-attachment-definitions.k8s.cni.cncf.io
+spec:
+  group: k8s.cni.cncf.io
+  version: v1
+  scope: Namespaced
+  names:
+    plural: network-attachment-definitions
+    singular: network-attachment-definition
+    kind: NetworkAttachmentDefinition
+    shortNames:
+    - net-attach-def
+  validation:
+    openAPIV3Schema:
+      properties:
+        spec:
+          properties:
+            config:
+              type: string
+
+
+# 3.1 Multus Cluster Role
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: multus
+rules:
+  - apiGroups: ["k8s.cni.cncf.io"]
+    resources:
+      - '*'
+    verbs:
+      - '*'
+  - apiGroups:
+      - ""
+    resources:
+      - pods
+      - pods/status
+    verbs:
+      - get
+      - update
+
+# 3.2 Multus Cluster Role Binding
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: multus
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: multus
+subjects:
+- kind: ServiceAccount
+  name: multus
+  namespace: kube-system
+
+# 4.1 SR-IOV Device Plugin ServiceAccount
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: sriov-device-plugin
+  namespace: kube-system
+
+# 4.2 Multus ServiceAccount
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: multus
+  namespace: kube-system
+
+# 5.1 SR-IOV Device Plugin DaemonSet
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-device-plugin-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriovdp
+spec:
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: sriovdp
+    spec:
+      hostNetwork: true
+      hostPID: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      #- key: node-role.kubernetes.io/master
+      #  operator: Exists
+      #  effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: sriov-device-plugin
+      containers:
+      - name: kube-sriovdp
+        image: nfvpe/sriov-device-plugin
+        imagePullPolicy: IfNotPresent
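+        # --log-dir/--log-level control the device plugin's logging; --resource-prefix
+        # sets the prefix of the resource names it advertises
+        # (e.g. arm.com/ps225_sriov_netdevice, as referenced in sriov-crd.yaml).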
+        args:
+        - --log-dir=sriovdp
+        - --log-level=10
+        - --resource-prefix=arm.com
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: devicesock
+          mountPath: /var/lib/kubelet/
+          readOnly: false
+        - name: log
+          mountPath: /var/log
+        - name: config-volume
+          mountPath: /etc/pcidp
+      volumes:
+        - name: devicesock
+          hostPath:
+            path: /var/lib/kubelet/
+        - name: log
+          hostPath:
+            path: /var/log
+        - name: config-volume
+          configMap:
+            name: sriovdp-config
+            items:
+            - key: config.json
+              path: config.json
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-device-plugin-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriovdp
+spec:
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: sriovdp
+    spec:
+      hostNetwork: true
+      hostPID: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+      #- key: node-role.kubernetes.io/master
+      #  operator: Exists
+      #  effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: sriov-device-plugin
+      containers:
+      - name: kube-sriovdp
+        #image: nfvpe/sriov-device-plugin
+        image: iecedge/sriov-device-plugin-arm64
+        imagePullPolicy: IfNotPresent
+        #imagePullPolicy: Never
+        args:
+        - --log-dir=sriovdp
+        - --log-level=10
+        - --resource-prefix=arm.com
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: devicesock
+          mountPath: /var/lib/kubelet/
+          readOnly: false
+        - name: log
+          mountPath: /var/log
+        - name: config-volume
+          mountPath: /etc/pcidp
+      volumes:
+        - name: devicesock
+          hostPath:
+            path: /var/lib/kubelet/
+        - name: log
+          hostPath:
+            path: /var/log
+        - name: config-volume
+          configMap:
+            name: sriovdp-config
+            items:
+            - key: config.json
+              path: config.json
+
+# 5.2 SR-IOV CNI DaemonSet
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-cni-ds-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriov-cni
+spec:
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: sriov-cni
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      - key: node-role.kubernetes.io/master
+        operator: Exists
+        effect: NoSchedule
+      containers:
+      - name: kube-sriov-cni
+        image: nfvpe/sriov-cni:latest
+        imagePullPolicy: IfNotPresent
+        securityContext:
+          privileged: true
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        volumeMounts:
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+      volumes:
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: kube-sriov-cni-ds-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: sriov-cni
+spec:
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: sriov-cni
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+      #- key: node-role.kubernetes.io/master
+      #  operator: Exists
+      #  effect: NoSchedule
+      - operator: Exists
+        effect: NoSchedule
+      containers:
+      - name: kube-sriov-cni
+        #image: nfvpe/sriov-cni-arm64:latest
+        image: iecedge/sriov-cni-arm64:latest
+        imagePullPolicy: IfNotPresent
+        securityContext:
+          privileged: true
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        volumeMounts:
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+      volumes:
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+
+# 5.3 Multus DaemonSet
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-multus-ds-amd64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+    name: multus
+spec:
+  selector:
+    matchLabels:
+      name: multus
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: multus
+        name: multus
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: multus
+      containers:
+      - name: kube-multus
+        #image: nfvpe/multus:v3.3
+        #- "--multus-conf-file=auto"
+        #- "--cni-version=0.3.1"
+        image: nfvpe/multus:v3.4
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: KUBERNETES_NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
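+        # The command below renders 70-multus.conf from the ConfigMap template by
+        # filling in this node's name, then passes it to the Multus entrypoint via
+        # --multus-conf-file.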
+        command:
+        - /bin/bash
+        - -cex
+        - |
+          #!/bin/bash
+          sed "s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g" /tmp/multus-conf/70-multus.conf.template > /tmp/multus-conf/70-multus.conf
+          /entrypoint.sh \
+            --multus-conf-file=/tmp/multus-conf/70-multus.conf
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: cni
+          mountPath: /host/etc/cni/net.d
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+        #- name: multus-cfg
+        #  mountPath: /tmp/multus-conf
+        #  readOnly: false
+        - name: multus-cfg
+          mountPath: /tmp/multus-conf/70-multus.conf.template
+          subPath: "cni-conf.json"
+        - name: kubernetes-cfg-dir
+          mountPath: /etc/kubernetes
+      volumes:
+        - name: cni
+          hostPath:
+            path: /etc/cni/net.d
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+        #- name: multus-cfg
+        #  configMap:
+        #    name: multus-cni-config
+        #    items:
+        #    - key: cni-conf.json
+        #      path: 70-multus.conf.template
+        - name: multus-cfg
+          configMap:
+            name: multus-cni-config
+        - name: kubernetes-cfg-dir
+          hostPath:
+            path: /etc/kubernetes
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: kube-multus-ds-arm64
+  namespace: kube-system
+  labels:
+    tier: node
+    app: multus
+    name: multus
+spec:
+  selector:
+    matchLabels:
+      name: multus
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        tier: node
+        app: multus
+        name: multus
+    spec:
+      hostNetwork: true
+      nodeSelector:
+        beta.kubernetes.io/arch: arm64
+      tolerations:
+      - operator: Exists
+        effect: NoSchedule
+      serviceAccountName: multus
+      containers:
+      - name: kube-multus
+        #image: nfvpe/multus:v3.3
+        #image: iecedge/multus-arm64:latest
+        #- "--multus-conf-file=auto"
+        #- "--cni-version=0.3.1"
+        image: iecedge/multus-arm64:v3.4
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: KUBERNETES_NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        command:
+        - /bin/bash
+        - -cex
+        - |
+          #!/bin/bash
+          sed "s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g" /tmp/multus-conf/70-multus.conf.template > /tmp/multus-conf/70-multus.conf
+          /entrypoint.sh \
+            --multus-conf-file=/tmp/multus-conf/70-multus.conf
+        resources:
+          requests:
+            cpu: "100m"
+            memory: "50Mi"
+          limits:
+            cpu: "100m"
+            memory: "50Mi"
+        securityContext:
+          privileged: true
+        volumeMounts:
+        - name: cni
+          mountPath: /host/etc/cni/net.d
+        - name: cnibin
+          mountPath: /host/opt/cni/bin
+        #- name: multus-cfg
+        #  mountPath: /tmp/multus-conf
+        #  readOnly: false
+        - name: multus-cfg
+          mountPath: /tmp/multus-conf/70-multus.conf.template
+          subPath: "cni-conf.json"
+        - name: kubernetes-cfg-dir
+          mountPath: /etc/kubernetes
+      volumes:
+        - name: cni
+          hostPath:
+            path: /etc/cni/net.d
+        - name: cnibin
+          hostPath:
+            path: /opt/cni/bin
+        #- name: multus-cfg
+        #  configMap:
+        #    name: multus-cni-config
+        #    items:
+        #    - key: cni-conf.json
+        #      path: 70-multus.conf.template
+        - name: multus-cfg
+          configMap:
+            name: multus-cni-config
+        - name: kubernetes-cfg-dir
+          hostPath:
+            path: /etc/kubernetes
+
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/sriov-crd.yaml b/src/foundation/scripts/cni/multus/multus-sriov-calico/sriov-crd.yaml
new file mode 100644 (file)
index 0000000..3502975
--- /dev/null
@@ -0,0 +1,24 @@
+# yamllint disable
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: sriov-net1
+  annotations:
+    k8s.v1.cni.cncf.io/resourceName: arm.com/ps225_sriov_netdevice
+    # To tag VF traffic, a "vlan": 1000 entry can be added to the config below.
+spec:
+  config: '{
+  "type": "sriov",
+  "cniVersion": "0.3.1",
+  "name": "sriov-network",
+  "ipam": {
+    "type": "host-local",
+    "subnet": "10.56.217.0/24",
+    "rangeStart": "10.56.217.11",
+    "rangeEnd": "10.56.217.181",
+    "routes": [{
+      "dst": "0.0.0.0/0"
+    }],
+    "gateway": "10.56.217.1"
+  }
+}'
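+# To use this network, reference it from a Pod annotation
+# "k8s.v1.cni.cncf.io/networks: sriov-net1" and request the
+# arm.com/ps225_sriov_netdevice resource in the Pod's resources section.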
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall-k8s-v1.16.sh b/src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall-k8s-v1.16.sh
new file mode 100755 (executable)
index 0000000..fbaba0e
--- /dev/null
@@ -0,0 +1,15 @@
+#!/bin/bash
+# shellcheck disable=SC1073,SC1072,SC1039,SC2059,SC2046
+set -x
+
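+# Delete the SR-IOV net-attach-def first, then the Calico and Multus/SR-IOV
+# DaemonSets, and finally the SR-IOV device plugin ConfigMap.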
+kubectl delete -f sriov-crd.yaml
+sleep 2
+kubectl delete -f calico-daemonset-k8s-v1.16.yaml
+sleep 5
+kubectl delete -f multus-sriov-calico-daemonsets-k8s-v1.16.yaml
+sleep 5
+kubectl delete -f configMap.yaml
+sleep 2
+
+kubectl get node $(hostname) -o json | jq '.status.allocatable' || true
+kubectl get pods --all-namespaces
diff --git a/src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall.sh b/src/foundation/scripts/cni/multus/multus-sriov-calico/uninstall.sh
new file mode 100755 (executable)
index 0000000..a0bcc79
--- /dev/null
@@ -0,0 +1,17 @@
+#!/bin/bash
+# shellcheck disable=SC1073,SC1072,SC1039,SC2059,SC2046
+set -x
+
+kubectl delete -f sriov-crd.yaml
+sleep 2
+kubectl delete -f calico-daemonset.yaml
+#kubectl delete -f calico-daemonset-k8s-v1.16.yaml
+sleep 5
+kubectl delete -f multus-sriov-calico-daemonsets.yaml
+#kubectl delete -f multus-sriov-calico-daemonsets-k8s-v1.16.yaml
+sleep 5
+kubectl delete -f configMap.yaml
+sleep 2
+
+kubectl get node $(hostname) -o json | jq '.status.allocatable' || true
+kubectl get pods --all-namespaces
index 76b963f..c29b6c0 100755 (executable)
@@ -84,6 +84,16 @@ install_multus_sriov_flannel(){
 
 }
 
+install_multus_sriov_calico(){
+
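+  # Point Calico at the cluster's pod network: replace the default CIDR in the
+  # shipped manifest with POD_NETWORK_CIDR from the IEC config.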
+  sed -i "s@10.244.0.0/16@${POD_NETWORK_CIDR}@" \
+    "${SCRIPTS_DIR}/cni/multus/multus-sriov-calico/calico-daemonset.yaml"
+  # Install Multus Calico+SRIOV by yaml files
+  # shellcheck source=/dev/null
+  source "${SCRIPTS_DIR}/cni/multus/multus-sriov-calico/install.sh"
+
+}
+
 install_danm(){
   ${SCRIPTS_DIR}/cni/danm/danm_install.sh
 
@@ -118,6 +128,10 @@ case ${CNI_TYPE} in
         echo "Install Flannel with SRIOV CNI by Multus-CNI ..."
         install_multus_sriov_flannel
         ;;
         echo "Install Flannel with SRIOV CNI by Multus-CNI ..."
         install_multus_sriov_flannel
         ;;
+ 'multus-sriov-calico')
+        echo "Install Calico with SRIOV CNI by Multus-CNI ..."
+        install_multus_sriov_calico
+        ;;
  'danm')
         echo "Install danm ..."
         install_danm