Due to the almost limitless number of possible hardware configurations, this installation guide has chosen a concrete configuration to use in the examples that follow.

> NOTE: The example configuration's BMC does not support Redfish virtual media, so IPMI is used instead. When supported by the BMC, it is recommended to use the more secure Redfish virtual media option as shown in the [Quick start guide](quick-start.md).

The configuration contains the following three machines.
| Hostname | CPU Model | Memory | Storage | IPMI: IP/MAC, U/P | 1GbE: NIC#, IP, MAC, VLAN, Network | 10GbE: NIC#, IP, MAC, VLAN, Network |
|----------|-----------|--------|---------|-------------------|------------------------------------|-------------------------------------|
| pod11-node5 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.15 00:1e:67:fc:ff:18<br/>U/P: root/root | IF0: 10.10.110.25 00:1e:67:fc:ff:16 VLAN 110<br/>IF1: 172.22.0.1 00:1e:67:fc:ff:17 VLAN 111 | |
| pod11-node3 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.13 00:1e:67:f1:5b:92<br/>U/P: root/root | IF0: 10.10.110.23 00:1e:67:f1:5b:90 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:f1:5b:91 VLAN 111 | IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113 |
| pod11-node2 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.12 00:1e:67:fe:f4:1b<br/>U/P: root/root | IF0: 10.10.110.22 00:1e:67:fe:f4:19 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:fe:f4:1a VLAN 111 | IF3: 10.10.113.3 00:1e:67:f8:6a:41 VLAN 113 |
`pod11-node5` will be the Local Controller or *jump server*. The other two machines will form a two-node K8s cluster.

The recommended hardware is servers with 64GB of memory, 32 CPUs, and SR-IOV network cards.

The machines are connected in the following topology.

![img](./pod11-topology.png "Topology")

There are three networks required by ICN:

- The `baremetal` network, used as the control plane network for K8s.
- The `provisioning` network, used during the infrastructure provisioning (OS installation) phase.
- The `IPMI` network, also used during the infrastructure provisioning phase.

In this configuration, the IPMI and baremetal interfaces share the same port and network. Care has been taken to ensure that the IP addresses do not conflict between the two interfaces.

There is an additional network connected to a high-speed switch:

- The `sriov` network, available for the application data plane.
#### Baseboard Management Controller (BMC) configuration

The BMC IP address should be statically assigned using the machine's BMC tool or application.

To verify that IPMI is configured correctly for each cluster machine, use the following command:

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
```

If the ipmitool output looks like the following, enable the *RMCP+ Cipher Suite3 Configuration* using the machine's BMC tool or application.

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error in open session response message : insufficient resources for session
Error: Unable to establish IPMI v2 / RMCP+ session
```

If the ipmitool output looks like the following, enable *IPMI over lan* using the machine's BMC tool or application.

```
# ipmitool -I lan -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error: Unable to establish LAN session
```

Additional information on ipmitool may be found at [Configuring IPMI under Linux using ipmitool](https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool).
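
For convenience, the same check can be run against every cluster machine in one pass. This is only a sketch: the BMC addresses and credentials are the example values from the table above and must be adjusted for a real site.

```
# Verify IPMI connectivity to each cluster machine's BMC (example values).
for bmc in 10.10.110.12 10.10.110.13; do
    echo -n "${bmc}: "
    ipmitool -I lanplus -H "${bmc}" -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
done
```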
#### PXE Boot configuration

Each cluster machine must be configured to PXE boot from the interface attached to the `provisioning` network.

One method of verifying that PXE boot is configured correctly is to access the remote console of the machine and observe the boot process. If the machine is not attempting PXE boot, or is attempting to PXE boot on the wrong interface, reboot the machine into the BIOS and select the correct interface in the boot options.

Additional verification can be done on the jump server using the tcpdump tool. The following command looks for DHCP or TFTP traffic arriving on any interface. Replace `any` with the interface attached to the provisioning network to verify end-to-end connectivity between the jump server and the cluster machine.

```
# tcpdump -i any port 67 or port 68 or port 69
```

If tcpdump does not show any traffic, verify that any switches are configured properly to forward PXE boot requests (i.e. VLAN configuration).
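
For example, using the provisioning interface of the example jump server (`enp4s0f3`, configured later in this guide) and disabling name and port resolution, the same check might look like this:

```
# Watch for DHCP/TFTP traffic from the PXE-booting machine on the
# provisioning interface (enp4s0f3 is the example jump server interface).
tcpdump -nn -i enp4s0f3 port 67 or port 68 or port 69
```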
### Configure the jump server

The jump server is required to be pre-installed with an OS. ICN supports Ubuntu 20.04.
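
A quick way to confirm the installed OS release before proceeding (a trivial check, included only for completeness):

```
# Confirm the jump server is running the supported Ubuntu release.
grep PRETTY_NAME /etc/os-release
```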
Before provisioning the jump server, first edit `user_config.sh` to provide the name of the interface connected to the provisioning network.

```
# ip --brief link show
enp4s0f3 UP 00:1e:67:fc:ff:17 <BROADCAST,MULTICAST,UP,LOWER_UP>
```

In the example configuration, `user_config.sh` contains:

```
export IRONIC_INTERFACE="enp4s0f3"
```
### Install the jump server components
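
With `user_config.sh` updated, install the components from the top level of the ICN repository. The `jump_server` Make target shown here is an assumption based on the corresponding clean target below; consult the repository Makefile for the authoritative target names.

```
# Run from the root of the ICN repository on the jump server.
make jump_server
```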
The jump server components can be removed later with:

```
make clean_jump_server
```
Before proceeding with the configuration, a basic understanding of the essential components used in ICN is required.

![img](./sw-diagram.png "Software Overview")

#### Flux

[Flux](https://fluxcd.io/) is a tool for implementing GitOps workflows, where infrastructure and application configuration is committed to source control and continuously deployed in a K8s cluster.

The important Flux resources ICN uses are:

- GitRepository, which describes where configuration data is committed
- HelmRelease, which describes an installation of a Helm chart
- Kustomization, which describes the application of K8s resources customized with a kustomization file
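
Once a site is deployed (see the following sections), these three resource types can be listed directly with kubectl; a minimal sketch, assuming the namespaces used later in this guide:

```
# List the Flux resources ICN relies on; namespaces vary by site.
kubectl -n flux-system get gitrepositories,kustomizations
kubectl -n metal3 get helmreleases
```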
#### Cluster API (CAPI)

[Cluster API](https://cluster-api.sigs.k8s.io/) provides declarative APIs and tooling for provisioning, upgrading, and operating K8s clusters.

There are a number of important CAPI resources that ICN uses. To ease deployment, ICN captures the resources into a Helm chart.
#### Bare Metal Operator (BMO)

Central to CAPI are the infrastructure and bootstrap providers, the pluggable components responsible for configuring the OS and K8s installation respectively.

ICN uses the [Cluster API Provider Metal3 for Managed Bare Metal Hardware](https://github.com/metal3-io/cluster-api-provider-metal3) for infrastructure provisioning, which in turn depends on the [Metal3 Bare Metal Operator](https://github.com/metal3-io/baremetal-operator) to do the actual work. The Bare Metal Operator uses [Ironic](https://ironicbaremetal.org/) to execute the low-level provisioning tasks.

Similar to the CAPI resources that ICN uses, ICN captures the Bare Metal Operator resources it uses into a Helm chart.
> NOTE: To assist in migrating from the nodes.json file and Provisioning resource used by R5 and earlier releases to the site YAML described below, a helper script is provided at `tools/migration/to_r6.sh`.
#### Define the compute cluster

The first step in provisioning a site with ICN is to define the desired day-0 configuration of the workload clusters.

A [configuration](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=tree;f=deploy/site/cluster-icn) containing all supported ICN components is available in the ICN repository. End-users may use this as a base and add or remove components as desired. Each YAML file in this configuration is one of the Flux resources described in the overview: GitRepository, HelmRelease, or Kustomization.
#### Define the site

A site definition is composed of BMO and CAPI resources, describing machines and clusters respectively. These resources are captured into the ICN machine and cluster Helm charts. Defining the site is therefore a matter of specifying the values needed by the charts.
##### Site-specific Considerations

Documentation for the machine chart may be found in its [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/machine/values.yaml), and documentation for the cluster chart may be found in its [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/cluster/values.yaml). Please review those for more information; what follows are some site-specific considerations to be aware of.

Note that there are a large number of ways to configure machines and especially clusters; for large-scale deployments it may make sense to create custom charts to eliminate duplication in the values files.
###### Control plane endpoint

The K8s control plane endpoint address must be provided to the cluster chart.

For a highly-available control plane, this would typically be a load-balanced virtual IP address. Configuration of an external load balancer is out of scope for this document. The chart also provides another mechanism to accomplish this, using the VRRP protocol to assign the control plane endpoint among the selected control plane nodes; see the `keepalived` dictionary in the cluster chart values.

For a single control plane node with a static IP address, some care must be taken to ensure that CAPI chooses the correct machine to provision as the control plane node. To do this, add a label to the `machineLabels` dictionary in the machine chart and specify a K8s match expression in the `controlPlaneHostSelector` dictionary of the cluster chart. Once done, the IP address of the labeled and selected machine can be used as the control plane endpoint address.
###### Static or dynamic baremetal network IPAM

The cluster and machine charts support either static or dynamic IPAM in the baremetal network.

Dynamic IPAM is configured by specifying the `networks` dictionary in the cluster chart. At least two entries must be included, the `baremetal` and `provisioning` networks. Under each entry, provide the predictable network interface name as the value of the `interface` key.

Note that this is in the cluster chart and is therefore a template applied to each machine used in the cluster. If the machines are sufficiently different such that the same interface name is not used on each machine, then the static approach below must be used instead.
Static IPAM is configured by specifying the `networks` dictionary in the machine chart. At least two entries must be included, the `baremetal` and `provisioning` networks. From the chart example values:

```
networks:
  baremetal:
    macAddress: 00:1e:67:fe:f4:19
    # type is either ipv4 or ipv4_dhcp
    type: ipv4
    # ipAddress is only valid for type ipv4
    ipAddress: 10.10.110.21/24
    # gateway is only valid for type ipv4
    # nameservers is an array of DNS servers; only valid for type ipv4
    nameservers: ["8.8.8.8"]
  provisioning:
    macAddress: 00:1e:67:fe:f4:1a
    type: ipv4_dhcp
```

The provisioning network must always be type `ipv4_dhcp`.
In either the static or dynamic case, additional networks may be included; however, the static assignment option for an individual network exists only when the machine chart approach is used.
The first step is to create a `site.yaml` file containing a Namespace to hold the site resources and a GitRepository pointing to the ICN repository where the machine and cluster Helm charts are located.

Note that when defining multiple sites, it is only necessary to apply the Namespace and GitRepository once on the jump server managing the sites.

```
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
# (metadata and the remaining spec fields are omitted here; see the example
# site definitions referenced below for a complete resource)
spec:
  gitImplementation: go-git
  url: https://gerrit.akraino.org/r/icn
```
##### Define a machine

Important values in the machine definition include:

- **machineName:** the host name of the machine
- **bmcAddress, bmcUsername, bmcPassword:** the bare metal controller (e.g. IPMI) access values

Capture each machine's values into a HelmRelease in the site YAML:

```
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
# (metadata and most spec fields are omitted here; see the example site
# definitions referenced below for complete resources)
spec:
  chart:
    spec:
      chart: deploy/machine
  values:
    machineName: pod11-node2
    bmcAddress: ipmi://10.10.110.12
    networks:
      baremetal:
        macAddress: 00:1e:67:fe:f4:19
        ipAddress: 10.10.110.22/24
      provisioning:
        macAddress: 00:1e:67:fe:f4:1a
      # (two additional networks are defined in the complete example, using
      # 00:1e:67:f8:6a:40 / 10.10.112.3/24 and 00:1e:67:f8:6a:41 / 10.10.113.3/24)
```
##### Define a cluster

Important values in the cluster definition include:

- **clusterName:** the name of the cluster
- **numControlPlaneMachines:** the number of control plane nodes
- **numWorkerMachines:** the number of worker nodes
- **controlPlaneEndpoint:** see [Site-specific Considerations](#site-specific-considerations) above
- **userData:** dictionary containing the default user's name, password hash, and SSH authorized key
- **flux:** dictionary containing the location of the day-0 configuration of the cluster; see [Define the compute cluster](#define-the-compute-cluster) above

Capture each cluster's values into a HelmRelease in the site YAML:

```
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
# (metadata and most spec fields are omitted here; see the example site
# definitions referenced below for complete resources)
spec:
  chart:
    spec:
      chart: deploy/cluster
  values:
    controlPlaneEndpoint: 10.10.110.23
    controlPlaneHostSelector:
      # (match expression selecting the labeled control plane machine omitted)
    userData:
      hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
      sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
    flux:
      url: https://gerrit.akraino.org/r/icn
      path: ./deploy/site/cluster-icn
```
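
The `hashedPassword` value is a standard crypt(3) SHA-512 hash. One way to generate it, assuming the `mkpasswd` utility from the `whois` package is installed on the jump server:

```
# Generate a SHA-512 crypt hash for use as hashedPassword
# (prompts for the password; 10000 rounds matches the example above).
mkpasswd --method=SHA-512 --rounds=10000
```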
##### Encrypt secrets in site definition

This step is optional, but recommended to protect sensitive information stored in the site definition. The site script is configured to protect the `bmcPassword` and `hashedPassword` values.

Use an existing GPG key pair or create a new one, then encrypt the secrets contained in the site YAML using site.sh. The public key and SOPS configuration are created in the site YAML directory; these may be used to encrypt (but not decrypt) future secrets.

```
# ./deploy/site/site.sh create-gpg-key site-secrets-key
# ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets-key
```
##### Example site definitions

Refer to the [pod11 site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/pod11/site.yaml) and the [vm site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/vm/site.yaml) for complete examples of site definitions for a static and dynamic baremetal network respectively. These site definitions are for the simple two machine clusters used in ICN testing.
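
One way to begin a new site definition is to copy one of these examples inside a checkout of the ICN repository and edit the values; the destination path below is only illustrative:

```
# Start a new site definition from the pod11 example (illustrative path).
mkdir -p deploy/site/mysite
cp deploy/site/pod11/site.yaml deploy/site/mysite/site.yaml
```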
#### Inform the Flux controllers of the site definition

The final step is to inform the jump server Flux controllers of the site definition by creating three resources:

- a GitRepository containing the location where the site definition is committed
- a Secret holding the GPG private key used to encrypt the secrets in the site definition
- a Kustomization referencing the GitRepository, Secret, and path in the repository where the site definition is located

This may be done with the help of the `site.sh` script:

```
# ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
```
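
As an illustration, a site committed to the master branch of the ICN repository under `deploy/site/pod11` and encrypted with the key created above might be registered as follows; the argument values are assumptions matching the examples in this guide:

```
# Register the pod11 site definition with the jump server Flux controllers.
./deploy/site/site.sh flux-create-site https://gerrit.akraino.org/r/icn master deploy/site/pod11 site-secrets-key
```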
#### Monitoring progress

The overall status of the cluster deployment can be monitored with `clusterctl`.

```
# clusterctl -n metal3 describe cluster icn
NAME READY SEVERITY REASON SINCE MESSAGE
/icn False Warning ScalingUp 4m14s Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn False Warning ScalingUp 4m14s Scaling up control plane to 1 replicas (actual 0)
│ └─Machine/icn-9sp7z False Info WaitingForInfrastructure 4m17s 1 of 2 completed
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-khtsk
└─MachineDeployment/icn False Warning WaitingForAvailableMachines 4m49s Minimum availability requires 1 replicas, current 0 available
  └─Machine/icn-6b8dfc7f6f-tmgv7 False Info WaitingForInfrastructure 4m49s 0 of 2 completed
    ├─BootstrapConfig - KubeadmConfig/icn-workers-79pl9 False Info WaitingForControlPlaneAvailable 4m19s
    └─MachineInfrastructure - Metal3Machine/icn-workers-m7vb8
```
The status of OS provisioning can be monitored by inspecting the `BareMetalHost` resources.

```
# kubectl -n metal3 get bmh
NAME STATE CONSUMER ONLINE ERROR AGE
pod11-node2 inspecting true 5m15s
pod11-node3 inspecting true 5m14s
```
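
The hosts move through several states (such as the inspecting state shown above, then provisioning and provisioned) before the K8s installation starts; watching the resources shows the transitions as they happen:

```
# Watch the BareMetalHost state transitions during OS provisioning.
kubectl -n metal3 get bmh -w
```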
Once the OS is installed, the status of K8s provisioning can be monitored by logging into the machine using the credentials from the `userData` section of the site values and inspecting the cloud-init log:

```
root@pod11-node2:~# tail -f /var/log/cloud-init-output.log
Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Wed, 05 Jan 2022 01:34:41 +0000. Up 131.66 seconds.
Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Wed, 05 Jan 2022 01:34:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sda2]. Up 132.02 seconds
```
Once the cluster's control plane is ready, its kubeconfig can be obtained with `clusterctl` and the status of the cluster can be monitored with `kubectl`.

```
# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
# kubectl --kubeconfig=icn-admin.conf get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
emco db-emco-mongo-0 1/1 Running 0 15h
emco emco-etcd-0 1/1 Running 0 15h
```
#### Examining the deployment process

The deployment resources can be examined with the kubectl and helm tools. The below example provides pointers to the resources on the jump server.

```
# kubectl -n flux-system get GitRepository
NAME URL READY STATUS AGE
icn-master https://gerrit.akraino.org/r/icn True Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 16h

# kubectl -n flux-system get Kustomization
NAME READY STATUS AGE
icn-master-site-pod11 True Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 7m4s

# kubectl -n metal3 get GitRepository
NAME URL READY STATUS AGE
icn https://gerrit.akraino.org/r/icn True Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 7m22s

# kubectl -n metal3 get HelmRelease
NAME READY STATUS AGE
cluster-icn True Release reconciliation succeeded 7m54s
pod11-node2 True Release reconciliation succeeded 7m54s
pod11-node3 True Release reconciliation succeeded 7m54s

# kubectl -n metal3 get HelmChart
NAME CHART VERSION SOURCE KIND SOURCE NAME READY STATUS AGE
metal3-cluster-icn deploy/cluster * GitRepository icn True Fetched and packaged revision: 0.1.0 8m9s
metal3-pod11-node2 deploy/machine * GitRepository icn True Fetched and packaged revision: 0.1.0 8m9s
metal3-pod11-node3 deploy/machine * GitRepository icn True Fetched and packaged revision: 0.1.0 8m9s

# helm -n metal3 ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cluster-icn metal3 2 2022-01-05 01:03:51.075860871 +0000 UTC deployed cluster-0.1.0
pod11-node2 metal3 2 2022-01-05 01:03:49.365432 +0000 UTC deployed machine-0.1.0
pod11-node3 metal3 2 2022-01-05 01:03:49.463726617 +0000 UTC deployed machine-0.1.0
```
```
# helm -n metal3 get values --all cluster-icn
containerRuntime: containerd
containerdVersion: 1.4.11-1
controlPlaneEndpoint: 10.10.110.23
controlPlaneHostSelector:
controlPlanePrefix: 24
dockerVersion: 5:20.10.10~3-0~ubuntu-focal
path: ./deploy/site/cluster-icn
url: https://gerrit.akraino.org/r/icn
imageName: focal-server-cloudimg-amd64.img
kubeVersion: 1.21.6-00
numControlPlaneMachines: 1
podCidr: 10.244.64.0/18
hashedPassword: $6$rounds=10000$bhRsNADLl$BzCcBaQ7Tle9AizUHcMKN2fygyPMqBebOuvhApI8B.pELWyFUaAWRasPOz.5Gf9bvCihakRnBTwsi217n2qQs1
sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5

# helm -n metal3 get values --all pod11-node2
bmcAddress: ipmi://10.10.110.12
machineName: pod11-node2
ipAddress: 10.10.110.22/24
macAddress: 00:1e:67:fe:f4:19
macAddress: 00:1e:67:fe:f4:1a
ipAddress: 10.10.113.3/24
macAddress: 00:1e:67:f8:6a:41

# helm -n metal3 get values --all pod11-node3
bmcAddress: ipmi://10.10.110.13
machineName: pod11-node3
ipAddress: 10.10.110.23/24
macAddress: 00:1e:67:f1:5b:90
macAddress: 00:1e:67:f1:5b:91
ipAddress: 10.10.113.4/24
macAddress: 00:1e:67:f8:69:81
```
Once the workload cluster is ready, the deployment resources may be examined there as well.

```
root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get GitRepository -A
NAMESPACE NAME URL READY STATUS AGE
emco emco https://github.com/open-ness/EMCO True Fetched revision: openness-21.03.06/18ec480f755119d54aa42c1bc3bd248dfd477165 16h
flux-system icn https://gerrit.akraino.org/r/icn True Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 16h
kud kud https://gerrit.onap.org/r/multicloud/k8s True Fetched revision: master/8157bf63753839ce4e9006978816fad3f63ca2de 16h

root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get Kustomization -A
NAMESPACE NAME READY STATUS AGE
flux-system icn-flux-sync True Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 16h
flux-system kata True Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50 16h

root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmRelease -A
NAMESPACE NAME READY STATUS AGE
emco db True Release reconciliation succeeded 16h
emco monitor True Release reconciliation succeeded 16h
emco podsecurity True Release reconciliation succeeded 16h
emco services True Release reconciliation succeeded 16h
emco tools True Release reconciliation succeeded 16h
kud cdi True Release reconciliation succeeded 16h
kud cdi-operator True Release reconciliation succeeded 16h
kud cpu-manager True Release reconciliation succeeded 16h
kud kubevirt True Release reconciliation succeeded 16h
kud kubevirt-operator True Release reconciliation succeeded 16h
kud multus-cni True Release reconciliation succeeded 16h
kud node-feature-discovery True Release reconciliation succeeded 16h
kud ovn4nfv True Release reconciliation succeeded 16h
kud ovn4nfv-network True Release reconciliation succeeded 16h
kud podsecurity True Release reconciliation succeeded 16h
kud qat-device-plugin True Release reconciliation succeeded 16h
kud sriov-network True Release reconciliation succeeded 16h
kud sriov-network-operator True Release reconciliation succeeded 16h

root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmChart -A
NAMESPACE NAME CHART VERSION SOURCE KIND SOURCE NAME READY STATUS AGE
emco emco-db deployments/helm/emcoOpenNESS/emco-db * GitRepository emco True Fetched and packaged revision: 0.1.0 16h
emco emco-monitor deployments/helm/monitor * GitRepository emco True Fetched and packaged revision: 0.1.0 16h
emco emco-services deployments/helm/emcoOpenNESS/emco-services * GitRepository emco True Fetched and packaged revision: 0.1.0 16h
emco emco-tools deployments/helm/emcoOpenNESS/emco-tools * GitRepository emco True Fetched and packaged revision: 0.1.0 16h
flux-system emco-podsecurity deploy/podsecurity * GitRepository icn True Fetched and packaged revision: 0.1.0 16h
flux-system kud-podsecurity deploy/podsecurity * GitRepository icn True Fetched and packaged revision: 0.1.0 16h
kud kud-cdi kud/deployment_infra/helm/cdi * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-cdi-operator kud/deployment_infra/helm/cdi-operator * GitRepository kud True Fetched and packaged revision: 0.1.1 16h
kud kud-cpu-manager kud/deployment_infra/helm/cpu-manager * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-kubevirt kud/deployment_infra/helm/kubevirt * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-kubevirt-operator kud/deployment_infra/helm/kubevirt-operator * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-multus-cni kud/deployment_infra/helm/multus-cni * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-node-feature-discovery kud/deployment_infra/helm/node-feature-discovery * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-ovn4nfv kud/deployment_infra/helm/ovn4nfv * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-ovn4nfv-network kud/deployment_infra/helm/ovn4nfv-network * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-qat-device-plugin kud/deployment_infra/helm/qat-device-plugin * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-sriov-network kud/deployment_infra/helm/sriov-network * GitRepository kud True Fetched and packaged revision: 0.1.0 16h
kud kud-sriov-network-operator kud/deployment_infra/helm/sriov-network-operator * GitRepository kud True Fetched and packaged revision: 0.1.0 16h

root@pod11-node5:# helm --kubeconfig=icn-admin.conf ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cdi kud 2 2022-01-05 01:54:28.39195226 +0000 UTC deployed cdi-0.1.0 v1.34.1
cdi-operator kud 2 2022-01-05 01:54:04.904465491 +0000 UTC deployed cdi-operator-0.1.1 v1.34.1
cpu-manager kud 2 2022-01-05 01:54:01.911819055 +0000 UTC deployed cpu-manager-0.1.0 v1.4.1-no-taint
db emco 2 2022-01-05 01:53:36.096690949 +0000 UTC deployed emco-db-0.1.0
kubevirt kud 2 2022-01-05 01:54:12.563840437 +0000 UTC deployed kubevirt-0.1.0 v0.41.0
kubevirt-operator kud 2 2022-01-05 01:53:59.190388299 +0000 UTC deployed kubevirt-operator-0.1.0 v0.41.0
monitor emco 2 2022-01-05 01:53:36.085180458 +0000 UTC deployed monitor-0.1.0 1.16.0
multus-cni kud 2 2022-01-05 01:54:03.494462704 +0000 UTC deployed multus-cni-0.1.0 v3.7
node-feature-discovery kud 2 2022-01-05 01:53:58.489616047 +0000 UTC deployed node-feature-discovery-0.1.0 v0.7.0
ovn4nfv kud 2 2022-01-05 01:54:07.488105774 +0000 UTC deployed ovn4nfv-0.1.0 v3.0.0
ovn4nfv-network kud 2 2022-01-05 01:54:31.79127155 +0000 UTC deployed ovn4nfv-network-0.1.0 v2.2.0
podsecurity kud 2 2022-01-05 01:53:37.400019369 +0000 UTC deployed podsecurity-0.1.0
podsecurity emco 2 2022-01-05 01:53:35.993351972 +0000 UTC deployed podsecurity-0.1.0
qat-device-plugin kud 2 2022-01-05 01:54:03.598022943 +0000 UTC deployed qat-device-plugin-0.1.0 0.19.0-kerneldrv
sriov-network kud 2 2022-01-05 01:54:31.695963579 +0000 UTC deployed sriov-network-0.1.0 4.8.0
sriov-network-operator kud 2 2022-01-05 01:54:07.787596951 +0000 UTC deployed sriov-network-operator-0.1.0 4.8.0
tools emco 2 2022-01-05 01:53:58.317119097 +0000 UTC deployed emco-tools-0.1.0

root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emco db-emco-mongo-0 1/1 Running 0 16h 10.244.65.53 pod11-node2 <none> <none>
emco emco-etcd-0 1/1 Running 0 16h 10.244.65.57 pod11-node2 <none> <none>
emco monitor-monitor-74649c5c64-dxhfn 1/1 Running 0 16h 10.244.65.65 pod11-node2 <none> <none>
emco services-clm-7ff876dfc-vgncs 1/1 Running 3 16h 10.244.65.58 pod11-node2 <none> <none>
```
Basic self-tests of Kata, EMCO, and the other addons may be performed with the `kata.sh` and `addons.sh` test scripts once the workload cluster is ready.

```
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
```
To destroy the workload cluster and deprovision its machines, it is only necessary to delete the site Kustomization. Uninstallation progress can be monitored similarly to deployment: with `clusterctl`, by examining the `BareMetalHost` resources, and so on.

```
root@pod11-node5:# kubectl -n flux-system delete Kustomization icn-master-site-pod11
```